2026-04-09 01:37:43.581495 | Job console starting
2026-04-09 01:37:43.589280 | Updating git repos
2026-04-09 01:37:43.680686 | Cloning repos into workspace
2026-04-09 01:37:43.888981 | Restoring repo states
2026-04-09 01:37:43.909760 | Merging changes
2026-04-09 01:37:43.909799 | Checking out repos
2026-04-09 01:37:44.142790 | Preparing playbooks
2026-04-09 01:37:44.786350 | Running Ansible setup
2026-04-09 01:37:49.396182 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-09 01:37:50.141559 |
2026-04-09 01:37:50.141733 | PLAY [Base pre]
2026-04-09 01:37:50.159018 |
2026-04-09 01:37:50.159164 | TASK [Setup log path fact]
2026-04-09 01:37:50.196443 | orchestrator | ok
2026-04-09 01:37:50.217405 |
2026-04-09 01:37:50.217585 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-09 01:37:50.255651 | orchestrator | ok
2026-04-09 01:37:50.271502 |
2026-04-09 01:37:50.271641 | TASK [emit-job-header : Print job information]
2026-04-09 01:37:50.320086 | # Job Information
2026-04-09 01:37:50.320381 | Ansible Version: 2.16.14
2026-04-09 01:37:50.320442 | Job: testbed-upgrade-stable-ubuntu-24.04
2026-04-09 01:37:50.320500 | Pipeline: periodic-midnight
2026-04-09 01:37:50.320541 | Executor: 521e9411259a
2026-04-09 01:37:50.320577 | Triggered by: https://github.com/osism/testbed
2026-04-09 01:37:50.320616 | Event ID: 229a3ccad3314f149ff7c6cbe4e5e7b7
2026-04-09 01:37:50.331129 |
2026-04-09 01:37:50.331275 | LOOP [emit-job-header : Print node information]
2026-04-09 01:37:50.460307 | orchestrator | ok:
2026-04-09 01:37:50.460594 | orchestrator | # Node Information
2026-04-09 01:37:50.460654 | orchestrator | Inventory Hostname: orchestrator
2026-04-09 01:37:50.460697 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-09 01:37:50.460737 | orchestrator | Username: zuul-testbed06
2026-04-09 01:37:50.460775 | orchestrator | Distro: Debian 12.13
2026-04-09 01:37:50.460818 | orchestrator | Provider: static-testbed
2026-04-09 01:37:50.460863 | orchestrator | Region:
2026-04-09 01:37:50.460901 | orchestrator | Label: testbed-orchestrator
2026-04-09 01:37:50.460953 | orchestrator | Product Name: OpenStack Nova
2026-04-09 01:37:50.460990 | orchestrator | Interface IP: 81.163.193.140
2026-04-09 01:37:50.484615 |
2026-04-09 01:37:50.484813 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-09 01:37:50.966925 | orchestrator -> localhost | changed
2026-04-09 01:37:50.982577 |
2026-04-09 01:37:50.982753 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-09 01:37:52.113848 | orchestrator -> localhost | changed
2026-04-09 01:37:52.137167 |
2026-04-09 01:37:52.137309 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-09 01:37:52.444519 | orchestrator -> localhost | ok
2026-04-09 01:37:52.451823 |
2026-04-09 01:37:52.451985 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-09 01:37:52.481454 | orchestrator | ok
2026-04-09 01:37:52.497746 | orchestrator | included: /var/lib/zuul/builds/efba89e74a524d7d8e2931de160f209f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-09 01:37:52.505856 |
2026-04-09 01:37:52.505971 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-09 01:37:53.980782 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-09 01:37:53.981300 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/efba89e74a524d7d8e2931de160f209f/work/efba89e74a524d7d8e2931de160f209f_id_rsa
2026-04-09 01:37:53.981412 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/efba89e74a524d7d8e2931de160f209f/work/efba89e74a524d7d8e2931de160f209f_id_rsa.pub
2026-04-09 01:37:53.981488 | orchestrator -> localhost | The key fingerprint is:
2026-04-09 01:37:53.981555 | orchestrator -> localhost | SHA256:D/YwHRCMk/f2vFKT/ie+62FBigIlR/USnHEgweUVdw8 zuul-build-sshkey
2026-04-09 01:37:53.981618 | orchestrator -> localhost | The key's randomart image is:
2026-04-09 01:37:53.981705 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-09 01:37:53.981770 | orchestrator -> localhost | | .*O**++.E .|
2026-04-09 01:37:53.981834 | orchestrator -> localhost | | ++++o= . o.|
2026-04-09 01:37:53.981894 | orchestrator -> localhost | | .o .+ . . .|
2026-04-09 01:37:53.981978 | orchestrator -> localhost | | . .o+ o |
2026-04-09 01:37:53.982069 | orchestrator -> localhost | | S.oo... |
2026-04-09 01:37:53.982143 | orchestrator -> localhost | | . B * . |
2026-04-09 01:37:53.982202 | orchestrator -> localhost | | oo oo |
2026-04-09 01:37:53.982259 | orchestrator -> localhost | | . o....|
2026-04-09 01:37:53.982319 | orchestrator -> localhost | | . +*= |
2026-04-09 01:37:53.982378 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-09 01:37:53.982517 | orchestrator -> localhost | ok: Runtime: 0:00:00.978413
2026-04-09 01:37:53.997686 |
2026-04-09 01:37:53.997845 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-09 01:37:54.035532 | orchestrator | ok
2026-04-09 01:37:54.049995 | orchestrator | included: /var/lib/zuul/builds/efba89e74a524d7d8e2931de160f209f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-09 01:37:54.060039 |
2026-04-09 01:37:54.060137 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-09 01:37:54.083626 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:54.092692 |
2026-04-09 01:37:54.092816 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-09 01:37:54.686571 | orchestrator | changed
2026-04-09 01:37:54.694907 |
2026-04-09 01:37:54.695085 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-09 01:37:54.993802 | orchestrator | ok
2026-04-09 01:37:55.002316 |
2026-04-09 01:37:55.002443 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-09 01:37:55.487769 | orchestrator | ok
2026-04-09 01:37:55.496564 |
2026-04-09 01:37:55.496712 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-09 01:37:55.941113 | orchestrator | ok
2026-04-09 01:37:55.949838 |
2026-04-09 01:37:55.950019 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-09 01:37:55.984634 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:55.999263 |
2026-04-09 01:37:55.999417 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-09 01:37:56.476267 | orchestrator -> localhost | changed
2026-04-09 01:37:56.501458 |
2026-04-09 01:37:56.501617 | TASK [add-build-sshkey : Add back temp key]
2026-04-09 01:37:56.840803 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/efba89e74a524d7d8e2931de160f209f/work/efba89e74a524d7d8e2931de160f209f_id_rsa (zuul-build-sshkey)
2026-04-09 01:37:56.841375 | orchestrator -> localhost | ok: Runtime: 0:00:00.022020
2026-04-09 01:37:56.856985 |
2026-04-09 01:37:56.857143 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-09 01:37:57.314543 | orchestrator | ok
2026-04-09 01:37:57.325359 |
2026-04-09 01:37:57.325511 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-09 01:37:57.354426 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:57.412663 |
2026-04-09 01:37:57.412794 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-09 01:37:57.847088 | orchestrator | ok
2026-04-09 01:37:57.862708 |
2026-04-09 01:37:57.862869 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-09 01:37:57.906672 | orchestrator | ok
2026-04-09 01:37:57.916392 |
2026-04-09 01:37:57.916508 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-09 01:37:58.223445 | orchestrator -> localhost | ok
2026-04-09 01:37:58.231174 |
2026-04-09 01:37:58.231291 | TASK [validate-host : Collect information about the host]
2026-04-09 01:37:59.489281 | orchestrator | ok
2026-04-09 01:37:59.506407 |
2026-04-09 01:37:59.506535 | TASK [validate-host : Sanitize hostname]
2026-04-09 01:37:59.573095 | orchestrator | ok
2026-04-09 01:37:59.581311 |
2026-04-09 01:37:59.581453 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-09 01:38:00.174268 | orchestrator -> localhost | changed
2026-04-09 01:38:00.183212 |
2026-04-09 01:38:00.183365 | TASK [validate-host : Collect information about zuul worker]
2026-04-09 01:38:00.648339 | orchestrator | ok
2026-04-09 01:38:00.654293 |
2026-04-09 01:38:00.654414 | TASK [validate-host : Write out all zuul information for each host]
2026-04-09 01:38:01.216392 | orchestrator -> localhost | changed
2026-04-09 01:38:01.236407 |
2026-04-09 01:38:01.236553 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-09 01:38:01.610290 | orchestrator | ok
2026-04-09 01:38:01.619850 |
2026-04-09 01:38:01.620047 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-09 01:38:30.878112 | orchestrator | changed:
2026-04-09 01:38:30.878395 | orchestrator | .d..t...... src/
2026-04-09 01:38:30.878448 | orchestrator | .d..t...... src/github.com/
2026-04-09 01:38:30.878485 | orchestrator | .d..t...... src/github.com/osism/
2026-04-09 01:38:30.878518 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-09 01:38:30.878547 | orchestrator | RedHat.yml
2026-04-09 01:38:30.895886 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-09 01:38:30.895903 | orchestrator | RedHat.yml
2026-04-09 01:38:30.895986 | orchestrator | = 1.53.0"...
2026-04-09 01:38:42.356326 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-09 01:38:42.528103 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-09 01:38:42.944917 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-09 01:38:43.020604 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-09 01:38:43.716614 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-09 01:38:43.791173 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-09 01:38:44.265544 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-09 01:38:44.265637 | orchestrator |
2026-04-09 01:38:44.265651 | orchestrator | Providers are signed by their developers.
2026-04-09 01:38:44.265662 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-09 01:38:44.265672 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-09 01:38:44.265716 | orchestrator |
2026-04-09 01:38:44.265731 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-09 01:38:44.265754 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-09 01:38:44.265791 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-09 01:38:44.265804 | orchestrator | you run "tofu init" in the future.
2026-04-09 01:38:44.265921 | orchestrator |
2026-04-09 01:38:44.265942 | orchestrator | OpenTofu has been successfully initialized!
2026-04-09 01:38:44.265954 | orchestrator |
2026-04-09 01:38:44.265966 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-09 01:38:44.265977 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-09 01:38:44.265997 | orchestrator | should now work.
2026-04-09 01:38:44.266009 | orchestrator |
2026-04-09 01:38:44.266057 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-09 01:38:44.266068 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-09 01:38:44.266080 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-09 01:38:44.481427 | orchestrator | Created and switched to workspace "ci"!
2026-04-09 01:38:44.481582 | orchestrator |
2026-04-09 01:38:44.481603 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-09 01:38:44.481617 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-09 01:38:44.481629 | orchestrator | for this configuration.
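The "tofu init" output above resolves and pins three providers (hashicorp/null, hashicorp/local, terraform-provider-openstack/openstack). A minimal required_providers block that would produce this kind of resolution might look like the sketch below; this is a hypothetical reconstruction, not the actual testbed configuration (one version constraint is truncated in the log and is left out here):

```hcl
# Hypothetical sketch — provider sources match the init output above,
# but the exact version constraints of the testbed config may differ.
terraform {
  required_providers {
    null = {
      source = "hashicorp/null"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # constraint shown in the init output
    }
    openstack = {
      source = "terraform-provider-openstack/openstack"
      # version constraint elided in the log above
    }
  }
}
```

After init, the resulting .terraform.lock.hcl records the exact versions selected (null v3.2.4, openstack v3.4.0, local v2.8.0) so later runs make the same selections.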
2026-04-09 01:38:44.626896 | orchestrator | ci.auto.tfvars
2026-04-09 01:38:44.631188 | orchestrator | default_custom.tf
2026-04-09 01:38:45.637160 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-09 01:38:46.172063 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-09 01:38:46.461514 | orchestrator |
2026-04-09 01:38:46.461595 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-09 01:38:46.461609 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-09 01:38:46.461617 | orchestrator | + create
2026-04-09 01:38:46.461628 | orchestrator | <= read (data resources)
2026-04-09 01:38:46.461636 | orchestrator |
2026-04-09 01:38:46.461644 | orchestrator | OpenTofu will perform the following actions:
2026-04-09 01:38:46.461665 | orchestrator |
2026-04-09 01:38:46.461673 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-09 01:38:46.461681 | orchestrator | # (config refers to values not yet known)
2026-04-09 01:38:46.461686 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-09 01:38:46.461691 | orchestrator | + checksum = (known after apply)
2026-04-09 01:38:46.461696 | orchestrator | + created_at = (known after apply)
2026-04-09 01:38:46.461700 | orchestrator | + file = (known after apply)
2026-04-09 01:38:46.461705 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.461729 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.461734 | orchestrator | + min_disk_gb = (known after apply)
2026-04-09 01:38:46.461738 | orchestrator | + min_ram_mb = (known after apply)
2026-04-09 01:38:46.461742 | orchestrator | + most_recent = true
2026-04-09 01:38:46.461747 | orchestrator | + name = (known after apply)
2026-04-09 01:38:46.461751 | orchestrator | + protected = (known after apply)
2026-04-09 01:38:46.461756 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.461764 | orchestrator | + schema = (known after apply)
2026-04-09 01:38:46.461769 | orchestrator | + size_bytes = (known after apply)
2026-04-09 01:38:46.461773 | orchestrator | + tags = (known after apply)
2026-04-09 01:38:46.461777 | orchestrator | + updated_at = (known after apply)
2026-04-09 01:38:46.461782 | orchestrator | }
2026-04-09 01:38:46.461786 | orchestrator |
2026-04-09 01:38:46.461790 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-09 01:38:46.461795 | orchestrator | # (config refers to values not yet known)
2026-04-09 01:38:46.461799 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-09 01:38:46.461803 | orchestrator | + checksum = (known after apply)
2026-04-09 01:38:46.461808 | orchestrator | + created_at = (known after apply)
2026-04-09 01:38:46.461812 | orchestrator | + file = (known after apply)
2026-04-09 01:38:46.461816 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.461821 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.461825 | orchestrator | + min_disk_gb = (known after apply)
2026-04-09 01:38:46.461829 | orchestrator | + min_ram_mb = (known after apply)
2026-04-09 01:38:46.461833 | orchestrator | + most_recent = true
2026-04-09 01:38:46.461837 | orchestrator | + name = (known after apply)
2026-04-09 01:38:46.461841 | orchestrator | + protected = (known after apply)
2026-04-09 01:38:46.461846 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.461850 | orchestrator | + schema = (known after apply)
2026-04-09 01:38:46.461854 | orchestrator | + size_bytes = (known after apply)
2026-04-09 01:38:46.461858 | orchestrator | + tags = (known after apply)
2026-04-09 01:38:46.461862 | orchestrator | + updated_at = (known after apply)
2026-04-09 01:38:46.461866 | orchestrator | }
2026-04-09 01:38:46.461873 | orchestrator |
2026-04-09 01:38:46.461878 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-09 01:38:46.461882 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-09 01:38:46.461887 | orchestrator | + content = (known after apply)
2026-04-09 01:38:46.461892 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-09 01:38:46.461899 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-09 01:38:46.461906 | orchestrator | + content_md5 = (known after apply)
2026-04-09 01:38:46.461913 | orchestrator | + content_sha1 = (known after apply)
2026-04-09 01:38:46.461919 | orchestrator | + content_sha256 = (known after apply)
2026-04-09 01:38:46.461926 | orchestrator | + content_sha512 = (known after apply)
2026-04-09 01:38:46.461933 | orchestrator | + directory_permission = "0777"
2026-04-09 01:38:46.461940 | orchestrator | + file_permission = "0644"
2026-04-09 01:38:46.461947 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-09 01:38:46.461954 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.461961 | orchestrator | }
2026-04-09 01:38:46.461968 | orchestrator |
2026-04-09 01:38:46.461975 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-09 01:38:46.461981 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-09 01:38:46.461988 | orchestrator | + content = (known after apply)
2026-04-09 01:38:46.461992 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-09 01:38:46.461996 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-09 01:38:46.462000 | orchestrator | + content_md5 = (known after apply)
2026-04-09 01:38:46.462005 | orchestrator | + content_sha1 = (known after apply)
2026-04-09 01:38:46.462009 | orchestrator | + content_sha256 = (known after apply)
2026-04-09 01:38:46.462013 | orchestrator | + content_sha512 = (known after apply)
2026-04-09 01:38:46.462038 | orchestrator | + directory_permission = "0777"
2026-04-09 01:38:46.462043 | orchestrator | + file_permission = "0644"
2026-04-09 01:38:46.462053 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-09 01:38:46.462057 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462062 | orchestrator | }
2026-04-09 01:38:46.462066 | orchestrator |
2026-04-09 01:38:46.462078 | orchestrator | # local_file.inventory will be created
2026-04-09 01:38:46.462082 | orchestrator | + resource "local_file" "inventory" {
2026-04-09 01:38:46.462086 | orchestrator | + content = (known after apply)
2026-04-09 01:38:46.462091 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-09 01:38:46.462095 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-09 01:38:46.462099 | orchestrator | + content_md5 = (known after apply)
2026-04-09 01:38:46.462103 | orchestrator | + content_sha1 = (known after apply)
2026-04-09 01:38:46.462108 | orchestrator | + content_sha256 = (known after apply)
2026-04-09 01:38:46.462112 | orchestrator | + content_sha512 = (known after apply)
2026-04-09 01:38:46.462116 | orchestrator | + directory_permission = "0777"
2026-04-09 01:38:46.462120 | orchestrator | + file_permission = "0644"
2026-04-09 01:38:46.462124 | orchestrator | + filename = "inventory.ci"
2026-04-09 01:38:46.462129 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462133 | orchestrator | }
2026-04-09 01:38:46.462137 | orchestrator |
2026-04-09 01:38:46.462141 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-09 01:38:46.462146 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-09 01:38:46.462150 | orchestrator | + content = (sensitive value)
2026-04-09 01:38:46.462154 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-09 01:38:46.462159 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-09 01:38:46.462163 | orchestrator | + content_md5 = (known after apply)
2026-04-09 01:38:46.462167 | orchestrator | + content_sha1 = (known after apply)
2026-04-09 01:38:46.462171 | orchestrator | + content_sha256 = (known after apply)
2026-04-09 01:38:46.462175 | orchestrator | + content_sha512 = (known after apply)
2026-04-09 01:38:46.462179 | orchestrator | + directory_permission = "0700"
2026-04-09 01:38:46.462184 | orchestrator | + file_permission = "0600"
2026-04-09 01:38:46.462188 | orchestrator | + filename = ".id_rsa.ci"
2026-04-09 01:38:46.462192 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462196 | orchestrator | }
2026-04-09 01:38:46.462201 | orchestrator |
2026-04-09 01:38:46.462246 | orchestrator | # null_resource.node_semaphore will be created
2026-04-09 01:38:46.462250 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-09 01:38:46.462255 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462259 | orchestrator | }
2026-04-09 01:38:46.462274 | orchestrator |
2026-04-09 01:38:46.462278 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-09 01:38:46.462283 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-09 01:38:46.462287 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462291 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462296 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462300 | orchestrator | + image_id = (known after apply)
2026-04-09 01:38:46.462304 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462309 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-09 01:38:46.462313 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462317 | orchestrator | + size = 80
2026-04-09 01:38:46.462321 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462325 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462329 | orchestrator | }
2026-04-09 01:38:46.462333 | orchestrator |
2026-04-09 01:38:46.462338 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-09 01:38:46.462342 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 01:38:46.462346 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462350 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462354 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462363 | orchestrator | + image_id = (known after apply)
2026-04-09 01:38:46.462370 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462376 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-09 01:38:46.462382 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462389 | orchestrator | + size = 80
2026-04-09 01:38:46.462396 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462402 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462409 | orchestrator | }
2026-04-09 01:38:46.462414 | orchestrator |
2026-04-09 01:38:46.462418 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-09 01:38:46.462422 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 01:38:46.462426 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462430 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462435 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462439 | orchestrator | + image_id = (known after apply)
2026-04-09 01:38:46.462443 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462447 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-09 01:38:46.462453 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462459 | orchestrator | + size = 80
2026-04-09 01:38:46.462466 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462473 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462479 | orchestrator | }
2026-04-09 01:38:46.462486 | orchestrator |
2026-04-09 01:38:46.462492 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-09 01:38:46.462499 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 01:38:46.462506 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462512 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462520 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462525 | orchestrator | + image_id = (known after apply)
2026-04-09 01:38:46.462529 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462533 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-09 01:38:46.462538 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462542 | orchestrator | + size = 80
2026-04-09 01:38:46.462546 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462550 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462554 | orchestrator | }
2026-04-09 01:38:46.462558 | orchestrator |
2026-04-09 01:38:46.462563 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-09 01:38:46.462567 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 01:38:46.462571 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462575 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462579 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462584 | orchestrator | + image_id = (known after apply)
2026-04-09 01:38:46.462588 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462595 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-09 01:38:46.462600 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462604 | orchestrator | + size = 80
2026-04-09 01:38:46.462608 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462612 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462617 | orchestrator | }
2026-04-09 01:38:46.462621 | orchestrator |
2026-04-09 01:38:46.462625 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-09 01:38:46.462629 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 01:38:46.462633 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462637 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462642 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462650 | orchestrator | + image_id = (known after apply)
2026-04-09 01:38:46.462654 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462658 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-09 01:38:46.462663 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462667 | orchestrator | + size = 80
2026-04-09 01:38:46.462673 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462680 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462686 | orchestrator | }
2026-04-09 01:38:46.462693 | orchestrator |
2026-04-09 01:38:46.462700 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-09 01:38:46.462707 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 01:38:46.462713 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462720 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462727 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462734 | orchestrator | + image_id = (known after apply)
2026-04-09 01:38:46.462741 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462752 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-09 01:38:46.462759 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462765 | orchestrator | + size = 80
2026-04-09 01:38:46.462772 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462778 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462785 | orchestrator | }
2026-04-09 01:38:46.462792 | orchestrator |
2026-04-09 01:38:46.462799 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-09 01:38:46.462806 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 01:38:46.462811 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462816 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462820 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462824 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462828 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-09 01:38:46.462833 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462837 | orchestrator | + size = 20
2026-04-09 01:38:46.462841 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462845 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462849 | orchestrator | }
2026-04-09 01:38:46.462853 | orchestrator |
2026-04-09 01:38:46.462858 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-09 01:38:46.462862 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 01:38:46.462866 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462870 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462874 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462878 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462882 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-09 01:38:46.462886 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462891 | orchestrator | + size = 20
2026-04-09 01:38:46.462895 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462899 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462903 | orchestrator | }
2026-04-09 01:38:46.462907 | orchestrator |
2026-04-09 01:38:46.462911 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-09 01:38:46.462916 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 01:38:46.462920 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462924 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462928 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462932 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462936 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-09 01:38:46.462940 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.462953 | orchestrator | + size = 20
2026-04-09 01:38:46.462958 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.462962 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.462966 | orchestrator | }
2026-04-09 01:38:46.462970 | orchestrator |
2026-04-09 01:38:46.462974 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-09 01:38:46.462978 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 01:38:46.462983 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.462987 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.462991 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.462995 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.462999 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-09 01:38:46.463004 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.463008 | orchestrator | + size = 20
2026-04-09 01:38:46.463012 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.463016 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.463020 | orchestrator | }
2026-04-09 01:38:46.463025 | orchestrator |
2026-04-09 01:38:46.463029 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-09 01:38:46.463033 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 01:38:46.463037 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.463041 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.463046 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.463050 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.463054 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-09 01:38:46.463058 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.463066 | orchestrator | + size = 20
2026-04-09 01:38:46.463070 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.463074 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.463079 | orchestrator | }
2026-04-09 01:38:46.463083 | orchestrator |
2026-04-09 01:38:46.463087 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-09 01:38:46.463091 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 01:38:46.463095 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.463099 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.463103 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.463108 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.463112 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-09 01:38:46.463116 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.463120 | orchestrator | + size = 20
2026-04-09 01:38:46.463124 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.463129 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.463133 | orchestrator | }
2026-04-09 01:38:46.463137 | orchestrator |
2026-04-09 01:38:46.463141 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-09 01:38:46.463145 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 01:38:46.463149 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.463154 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.463158 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.463162 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.463166 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-09 01:38:46.463170 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.463174 | orchestrator | + size = 20
2026-04-09 01:38:46.463178 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.463183 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.463187 | orchestrator | }
2026-04-09 01:38:46.463191 | orchestrator |
2026-04-09 01:38:46.463199 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-09 01:38:46.463217 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 01:38:46.463226 | orchestrator | + attachment = (known after apply)
2026-04-09 01:38:46.463230 | orchestrator | + availability_zone = "nova"
2026-04-09 01:38:46.463234 | orchestrator | + id = (known after apply)
2026-04-09 01:38:46.463239 | orchestrator | + metadata = (known after apply)
2026-04-09 01:38:46.463243 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-09 01:38:46.463247 | orchestrator | + region = (known after apply)
2026-04-09 01:38:46.463251 | orchestrator | + size = 20
2026-04-09 01:38:46.463255 | orchestrator | + volume_retype_policy = "never"
2026-04-09 01:38:46.463260 | orchestrator | + volume_type = "ssd"
2026-04-09 01:38:46.463264 | orchestrator | }
2026-04-09 01:38:46.463268 | orchestrator |
2026-04-09 01:38:46.463272 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-09 01:38:46.463276 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-09 01:38:46.463281 | orchestrator | + attachment = (known after apply) 2026-04-09 01:38:46.463285 | orchestrator | + availability_zone = "nova" 2026-04-09 01:38:46.463289 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.463293 | orchestrator | + metadata = (known after apply) 2026-04-09 01:38:46.463297 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-09 01:38:46.463302 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.463306 | orchestrator | + size = 20 2026-04-09 01:38:46.463310 | orchestrator | + volume_retype_policy = "never" 2026-04-09 01:38:46.463314 | orchestrator | + volume_type = "ssd" 2026-04-09 01:38:46.463318 | orchestrator | } 2026-04-09 01:38:46.463322 | orchestrator | 2026-04-09 01:38:46.463326 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-09 01:38:46.463331 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-09 01:38:46.463335 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 01:38:46.463339 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 01:38:46.463343 | orchestrator | + all_metadata = (known after apply) 2026-04-09 01:38:46.463347 | orchestrator | + all_tags = (known after apply) 2026-04-09 01:38:46.463351 | orchestrator | + availability_zone = "nova" 2026-04-09 01:38:46.463356 | orchestrator | + config_drive = true 2026-04-09 01:38:46.463360 | orchestrator | + created = (known after apply) 2026-04-09 01:38:46.463364 | orchestrator | + flavor_id = (known after apply) 2026-04-09 01:38:46.463368 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-09 01:38:46.463372 | orchestrator | + force_delete = false 2026-04-09 01:38:46.463377 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 01:38:46.463381 | 
orchestrator | + id = (known after apply) 2026-04-09 01:38:46.463385 | orchestrator | + image_id = (known after apply) 2026-04-09 01:38:46.463389 | orchestrator | + image_name = (known after apply) 2026-04-09 01:38:46.463393 | orchestrator | + key_pair = "testbed" 2026-04-09 01:38:46.463397 | orchestrator | + name = "testbed-manager" 2026-04-09 01:38:46.463401 | orchestrator | + power_state = "active" 2026-04-09 01:38:46.463406 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.463410 | orchestrator | + security_groups = (known after apply) 2026-04-09 01:38:46.463414 | orchestrator | + stop_before_destroy = false 2026-04-09 01:38:46.463418 | orchestrator | + updated = (known after apply) 2026-04-09 01:38:46.463423 | orchestrator | + user_data = (sensitive value) 2026-04-09 01:38:46.463427 | orchestrator | 2026-04-09 01:38:46.463431 | orchestrator | + block_device { 2026-04-09 01:38:46.463435 | orchestrator | + boot_index = 0 2026-04-09 01:38:46.463440 | orchestrator | + delete_on_termination = false 2026-04-09 01:38:46.463447 | orchestrator | + destination_type = "volume" 2026-04-09 01:38:46.463451 | orchestrator | + multiattach = false 2026-04-09 01:38:46.463456 | orchestrator | + source_type = "volume" 2026-04-09 01:38:46.463460 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.463468 | orchestrator | } 2026-04-09 01:38:46.463472 | orchestrator | 2026-04-09 01:38:46.463476 | orchestrator | + network { 2026-04-09 01:38:46.463481 | orchestrator | + access_network = false 2026-04-09 01:38:46.463485 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 01:38:46.463490 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 01:38:46.463496 | orchestrator | + mac = (known after apply) 2026-04-09 01:38:46.463502 | orchestrator | + name = (known after apply) 2026-04-09 01:38:46.463508 | orchestrator | + port = (known after apply) 2026-04-09 01:38:46.463514 | orchestrator | + uuid = (known after apply) 2026-04-09 
01:38:46.463522 | orchestrator | } 2026-04-09 01:38:46.463528 | orchestrator | } 2026-04-09 01:38:46.463536 | orchestrator | 2026-04-09 01:38:46.463542 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-09 01:38:46.463549 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 01:38:46.463555 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 01:38:46.463562 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 01:38:46.463568 | orchestrator | + all_metadata = (known after apply) 2026-04-09 01:38:46.463573 | orchestrator | + all_tags = (known after apply) 2026-04-09 01:38:46.463579 | orchestrator | + availability_zone = "nova" 2026-04-09 01:38:46.463585 | orchestrator | + config_drive = true 2026-04-09 01:38:46.463591 | orchestrator | + created = (known after apply) 2026-04-09 01:38:46.463598 | orchestrator | + flavor_id = (known after apply) 2026-04-09 01:38:46.463604 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 01:38:46.463610 | orchestrator | + force_delete = false 2026-04-09 01:38:46.463616 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 01:38:46.463622 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.463628 | orchestrator | + image_id = (known after apply) 2026-04-09 01:38:46.463633 | orchestrator | + image_name = (known after apply) 2026-04-09 01:38:46.463640 | orchestrator | + key_pair = "testbed" 2026-04-09 01:38:46.463646 | orchestrator | + name = "testbed-node-0" 2026-04-09 01:38:46.463653 | orchestrator | + power_state = "active" 2026-04-09 01:38:46.463659 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.463665 | orchestrator | + security_groups = (known after apply) 2026-04-09 01:38:46.463671 | orchestrator | + stop_before_destroy = false 2026-04-09 01:38:46.463677 | orchestrator | + updated = (known after apply) 2026-04-09 01:38:46.463683 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 01:38:46.463690 | orchestrator | 2026-04-09 01:38:46.463696 | orchestrator | + block_device { 2026-04-09 01:38:46.463702 | orchestrator | + boot_index = 0 2026-04-09 01:38:46.463714 | orchestrator | + delete_on_termination = false 2026-04-09 01:38:46.463721 | orchestrator | + destination_type = "volume" 2026-04-09 01:38:46.463728 | orchestrator | + multiattach = false 2026-04-09 01:38:46.463734 | orchestrator | + source_type = "volume" 2026-04-09 01:38:46.463740 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.463746 | orchestrator | } 2026-04-09 01:38:46.463752 | orchestrator | 2026-04-09 01:38:46.463759 | orchestrator | + network { 2026-04-09 01:38:46.463765 | orchestrator | + access_network = false 2026-04-09 01:38:46.463771 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 01:38:46.463778 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 01:38:46.463784 | orchestrator | + mac = (known after apply) 2026-04-09 01:38:46.463790 | orchestrator | + name = (known after apply) 2026-04-09 01:38:46.463796 | orchestrator | + port = (known after apply) 2026-04-09 01:38:46.463802 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.463807 | orchestrator | } 2026-04-09 01:38:46.463813 | orchestrator | } 2026-04-09 01:38:46.463820 | orchestrator | 2026-04-09 01:38:46.463827 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-09 01:38:46.463833 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 01:38:46.463839 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 01:38:46.463855 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 01:38:46.463861 | orchestrator | + all_metadata = (known after apply) 2026-04-09 01:38:46.463867 | orchestrator | + all_tags = (known after apply) 2026-04-09 01:38:46.463873 | orchestrator | + availability_zone = "nova" 2026-04-09 01:38:46.463879 
| orchestrator | + config_drive = true 2026-04-09 01:38:46.463885 | orchestrator | + created = (known after apply) 2026-04-09 01:38:46.463891 | orchestrator | + flavor_id = (known after apply) 2026-04-09 01:38:46.463897 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 01:38:46.463903 | orchestrator | + force_delete = false 2026-04-09 01:38:46.463909 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 01:38:46.463915 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.463921 | orchestrator | + image_id = (known after apply) 2026-04-09 01:38:46.463927 | orchestrator | + image_name = (known after apply) 2026-04-09 01:38:46.463933 | orchestrator | + key_pair = "testbed" 2026-04-09 01:38:46.463940 | orchestrator | + name = "testbed-node-1" 2026-04-09 01:38:46.463946 | orchestrator | + power_state = "active" 2026-04-09 01:38:46.463953 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.463961 | orchestrator | + security_groups = (known after apply) 2026-04-09 01:38:46.463969 | orchestrator | + stop_before_destroy = false 2026-04-09 01:38:46.463975 | orchestrator | + updated = (known after apply) 2026-04-09 01:38:46.463982 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 01:38:46.463990 | orchestrator | 2026-04-09 01:38:46.463997 | orchestrator | + block_device { 2026-04-09 01:38:46.464004 | orchestrator | + boot_index = 0 2026-04-09 01:38:46.464011 | orchestrator | + delete_on_termination = false 2026-04-09 01:38:46.464017 | orchestrator | + destination_type = "volume" 2026-04-09 01:38:46.464023 | orchestrator | + multiattach = false 2026-04-09 01:38:46.464029 | orchestrator | + source_type = "volume" 2026-04-09 01:38:46.464035 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.464041 | orchestrator | } 2026-04-09 01:38:46.464048 | orchestrator | 2026-04-09 01:38:46.464054 | orchestrator | + network { 2026-04-09 01:38:46.464060 | orchestrator | + access_network = 
false 2026-04-09 01:38:46.464066 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 01:38:46.464072 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 01:38:46.464079 | orchestrator | + mac = (known after apply) 2026-04-09 01:38:46.464085 | orchestrator | + name = (known after apply) 2026-04-09 01:38:46.464091 | orchestrator | + port = (known after apply) 2026-04-09 01:38:46.464098 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.464104 | orchestrator | } 2026-04-09 01:38:46.464111 | orchestrator | } 2026-04-09 01:38:46.464118 | orchestrator | 2026-04-09 01:38:46.464125 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-09 01:38:46.464131 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 01:38:46.464137 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 01:38:46.464143 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 01:38:46.464151 | orchestrator | + all_metadata = (known after apply) 2026-04-09 01:38:46.464158 | orchestrator | + all_tags = (known after apply) 2026-04-09 01:38:46.464173 | orchestrator | + availability_zone = "nova" 2026-04-09 01:38:46.464180 | orchestrator | + config_drive = true 2026-04-09 01:38:46.464186 | orchestrator | + created = (known after apply) 2026-04-09 01:38:46.464193 | orchestrator | + flavor_id = (known after apply) 2026-04-09 01:38:46.464199 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 01:38:46.464322 | orchestrator | + force_delete = false 2026-04-09 01:38:46.464336 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 01:38:46.464341 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.464345 | orchestrator | + image_id = (known after apply) 2026-04-09 01:38:46.464357 | orchestrator | + image_name = (known after apply) 2026-04-09 01:38:46.464361 | orchestrator | + key_pair = "testbed" 2026-04-09 01:38:46.464365 | orchestrator | + name = 
"testbed-node-2" 2026-04-09 01:38:46.464369 | orchestrator | + power_state = "active" 2026-04-09 01:38:46.464373 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.464377 | orchestrator | + security_groups = (known after apply) 2026-04-09 01:38:46.464381 | orchestrator | + stop_before_destroy = false 2026-04-09 01:38:46.464386 | orchestrator | + updated = (known after apply) 2026-04-09 01:38:46.464390 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 01:38:46.464394 | orchestrator | 2026-04-09 01:38:46.464399 | orchestrator | + block_device { 2026-04-09 01:38:46.464403 | orchestrator | + boot_index = 0 2026-04-09 01:38:46.464407 | orchestrator | + delete_on_termination = false 2026-04-09 01:38:46.464411 | orchestrator | + destination_type = "volume" 2026-04-09 01:38:46.464415 | orchestrator | + multiattach = false 2026-04-09 01:38:46.464419 | orchestrator | + source_type = "volume" 2026-04-09 01:38:46.464424 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.464428 | orchestrator | } 2026-04-09 01:38:46.464432 | orchestrator | 2026-04-09 01:38:46.464436 | orchestrator | + network { 2026-04-09 01:38:46.464441 | orchestrator | + access_network = false 2026-04-09 01:38:46.464445 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 01:38:46.464449 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 01:38:46.464453 | orchestrator | + mac = (known after apply) 2026-04-09 01:38:46.464466 | orchestrator | + name = (known after apply) 2026-04-09 01:38:46.464471 | orchestrator | + port = (known after apply) 2026-04-09 01:38:46.464475 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.464479 | orchestrator | } 2026-04-09 01:38:46.464483 | orchestrator | } 2026-04-09 01:38:46.464495 | orchestrator | 2026-04-09 01:38:46.464500 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-09 01:38:46.464505 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-09 01:38:46.464509 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 01:38:46.464513 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 01:38:46.464517 | orchestrator | + all_metadata = (known after apply) 2026-04-09 01:38:46.464521 | orchestrator | + all_tags = (known after apply) 2026-04-09 01:38:46.464525 | orchestrator | + availability_zone = "nova" 2026-04-09 01:38:46.464530 | orchestrator | + config_drive = true 2026-04-09 01:38:46.464534 | orchestrator | + created = (known after apply) 2026-04-09 01:38:46.464538 | orchestrator | + flavor_id = (known after apply) 2026-04-09 01:38:46.464543 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 01:38:46.464547 | orchestrator | + force_delete = false 2026-04-09 01:38:46.464551 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 01:38:46.464555 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.464559 | orchestrator | + image_id = (known after apply) 2026-04-09 01:38:46.464564 | orchestrator | + image_name = (known after apply) 2026-04-09 01:38:46.464568 | orchestrator | + key_pair = "testbed" 2026-04-09 01:38:46.464572 | orchestrator | + name = "testbed-node-3" 2026-04-09 01:38:46.464576 | orchestrator | + power_state = "active" 2026-04-09 01:38:46.464580 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.464585 | orchestrator | + security_groups = (known after apply) 2026-04-09 01:38:46.464589 | orchestrator | + stop_before_destroy = false 2026-04-09 01:38:46.464593 | orchestrator | + updated = (known after apply) 2026-04-09 01:38:46.464597 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 01:38:46.464602 | orchestrator | 2026-04-09 01:38:46.464606 | orchestrator | + block_device { 2026-04-09 01:38:46.464615 | orchestrator | + boot_index = 0 2026-04-09 01:38:46.464619 | orchestrator | + delete_on_termination = false 2026-04-09 
01:38:46.464623 | orchestrator | + destination_type = "volume" 2026-04-09 01:38:46.464631 | orchestrator | + multiattach = false 2026-04-09 01:38:46.464636 | orchestrator | + source_type = "volume" 2026-04-09 01:38:46.464640 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.464644 | orchestrator | } 2026-04-09 01:38:46.464648 | orchestrator | 2026-04-09 01:38:46.464653 | orchestrator | + network { 2026-04-09 01:38:46.464657 | orchestrator | + access_network = false 2026-04-09 01:38:46.464661 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 01:38:46.464665 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 01:38:46.464669 | orchestrator | + mac = (known after apply) 2026-04-09 01:38:46.464674 | orchestrator | + name = (known after apply) 2026-04-09 01:38:46.464678 | orchestrator | + port = (known after apply) 2026-04-09 01:38:46.464682 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.464686 | orchestrator | } 2026-04-09 01:38:46.464690 | orchestrator | } 2026-04-09 01:38:46.464694 | orchestrator | 2026-04-09 01:38:46.464699 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-09 01:38:46.464703 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 01:38:46.464707 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 01:38:46.464711 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 01:38:46.464716 | orchestrator | + all_metadata = (known after apply) 2026-04-09 01:38:46.464720 | orchestrator | + all_tags = (known after apply) 2026-04-09 01:38:46.464724 | orchestrator | + availability_zone = "nova" 2026-04-09 01:38:46.464728 | orchestrator | + config_drive = true 2026-04-09 01:38:46.464732 | orchestrator | + created = (known after apply) 2026-04-09 01:38:46.464737 | orchestrator | + flavor_id = (known after apply) 2026-04-09 01:38:46.464741 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 01:38:46.464745 | 
orchestrator | + force_delete = false 2026-04-09 01:38:46.464749 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 01:38:46.464753 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.464757 | orchestrator | + image_id = (known after apply) 2026-04-09 01:38:46.464761 | orchestrator | + image_name = (known after apply) 2026-04-09 01:38:46.464766 | orchestrator | + key_pair = "testbed" 2026-04-09 01:38:46.464770 | orchestrator | + name = "testbed-node-4" 2026-04-09 01:38:46.464774 | orchestrator | + power_state = "active" 2026-04-09 01:38:46.464778 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.464782 | orchestrator | + security_groups = (known after apply) 2026-04-09 01:38:46.464786 | orchestrator | + stop_before_destroy = false 2026-04-09 01:38:46.464790 | orchestrator | + updated = (known after apply) 2026-04-09 01:38:46.464794 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 01:38:46.464798 | orchestrator | 2026-04-09 01:38:46.464802 | orchestrator | + block_device { 2026-04-09 01:38:46.464805 | orchestrator | + boot_index = 0 2026-04-09 01:38:46.464809 | orchestrator | + delete_on_termination = false 2026-04-09 01:38:46.464813 | orchestrator | + destination_type = "volume" 2026-04-09 01:38:46.464817 | orchestrator | + multiattach = false 2026-04-09 01:38:46.464821 | orchestrator | + source_type = "volume" 2026-04-09 01:38:46.464824 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.464828 | orchestrator | } 2026-04-09 01:38:46.464832 | orchestrator | 2026-04-09 01:38:46.464836 | orchestrator | + network { 2026-04-09 01:38:46.464840 | orchestrator | + access_network = false 2026-04-09 01:38:46.464844 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 01:38:46.464848 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 01:38:46.464851 | orchestrator | + mac = (known after apply) 2026-04-09 01:38:46.464855 | orchestrator | + name = (known 
after apply) 2026-04-09 01:38:46.464859 | orchestrator | + port = (known after apply) 2026-04-09 01:38:46.464863 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.464867 | orchestrator | } 2026-04-09 01:38:46.464871 | orchestrator | } 2026-04-09 01:38:46.464878 | orchestrator | 2026-04-09 01:38:46.464882 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-09 01:38:46.464886 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 01:38:46.464890 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 01:38:46.464894 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 01:38:46.464898 | orchestrator | + all_metadata = (known after apply) 2026-04-09 01:38:46.464904 | orchestrator | + all_tags = (known after apply) 2026-04-09 01:38:46.464908 | orchestrator | + availability_zone = "nova" 2026-04-09 01:38:46.464912 | orchestrator | + config_drive = true 2026-04-09 01:38:46.464916 | orchestrator | + created = (known after apply) 2026-04-09 01:38:46.464920 | orchestrator | + flavor_id = (known after apply) 2026-04-09 01:38:46.464923 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 01:38:46.464927 | orchestrator | + force_delete = false 2026-04-09 01:38:46.464934 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 01:38:46.464938 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.464942 | orchestrator | + image_id = (known after apply) 2026-04-09 01:38:46.464946 | orchestrator | + image_name = (known after apply) 2026-04-09 01:38:46.464949 | orchestrator | + key_pair = "testbed" 2026-04-09 01:38:46.464953 | orchestrator | + name = "testbed-node-5" 2026-04-09 01:38:46.464957 | orchestrator | + power_state = "active" 2026-04-09 01:38:46.464961 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.464965 | orchestrator | + security_groups = (known after apply) 2026-04-09 01:38:46.464968 | orchestrator | + 
stop_before_destroy = false 2026-04-09 01:38:46.464972 | orchestrator | + updated = (known after apply) 2026-04-09 01:38:46.464976 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 01:38:46.464980 | orchestrator | 2026-04-09 01:38:46.464984 | orchestrator | + block_device { 2026-04-09 01:38:46.464987 | orchestrator | + boot_index = 0 2026-04-09 01:38:46.464991 | orchestrator | + delete_on_termination = false 2026-04-09 01:38:46.464995 | orchestrator | + destination_type = "volume" 2026-04-09 01:38:46.464999 | orchestrator | + multiattach = false 2026-04-09 01:38:46.465003 | orchestrator | + source_type = "volume" 2026-04-09 01:38:46.465007 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.465010 | orchestrator | } 2026-04-09 01:38:46.465014 | orchestrator | 2026-04-09 01:38:46.465018 | orchestrator | + network { 2026-04-09 01:38:46.465022 | orchestrator | + access_network = false 2026-04-09 01:38:46.465026 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 01:38:46.465030 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 01:38:46.465033 | orchestrator | + mac = (known after apply) 2026-04-09 01:38:46.465037 | orchestrator | + name = (known after apply) 2026-04-09 01:38:46.465041 | orchestrator | + port = (known after apply) 2026-04-09 01:38:46.465045 | orchestrator | + uuid = (known after apply) 2026-04-09 01:38:46.465049 | orchestrator | } 2026-04-09 01:38:46.465053 | orchestrator | } 2026-04-09 01:38:46.465057 | orchestrator | 2026-04-09 01:38:46.465061 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-09 01:38:46.465064 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-09 01:38:46.465068 | orchestrator | + fingerprint = (known after apply) 2026-04-09 01:38:46.465072 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.465076 | orchestrator | + name = "testbed" 2026-04-09 01:38:46.465080 | orchestrator | + private_key = 
(sensitive value) 2026-04-09 01:38:46.465084 | orchestrator | + public_key = (known after apply) 2026-04-09 01:38:46.465087 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.465091 | orchestrator | + user_id = (known after apply) 2026-04-09 01:38:46.465095 | orchestrator | } 2026-04-09 01:38:46.465099 | orchestrator | 2026-04-09 01:38:46.465103 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-09 01:38:46.465107 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-09 01:38:46.465114 | orchestrator | + device = (known after apply) 2026-04-09 01:38:46.465118 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.465122 | orchestrator | + instance_id = (known after apply) 2026-04-09 01:38:46.465126 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.465130 | orchestrator | + volume_id = (known after apply) 2026-04-09 01:38:46.465134 | orchestrator | } 2026-04-09 01:38:46.465137 | orchestrator | 2026-04-09 01:38:46.465141 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-09 01:38:46.465145 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-09 01:38:46.465149 | orchestrator | + device = (known after apply) 2026-04-09 01:38:46.465153 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.465157 | orchestrator | + instance_id = (known after apply) 2026-04-09 01:38:46.465161 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.465164 | orchestrator | + volume_id = (known after apply) 2026-04-09 01:38:46.465168 | orchestrator | } 2026-04-09 01:38:46.465172 | orchestrator | 2026-04-09 01:38:46.465176 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-09 01:38:46.465180 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  (node_volume_attachment[4] through [8] will also be created; their blocks are identical to [3])

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  (node_port_management[1] through [4] will also be created; their blocks are identical to [0]
  except for fixed_ip.ip_address: "192.168.16.11", "192.168.16.12", "192.168.16.13", "192.168.16.14")

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
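The node ports planned above each carry a fixed management address (192.168.16.10-.15) plus allowed_address_pairs entries for the shared VIPs (.254, .8, .9), so Neutron port security will not drop traffic for those addresses. A hypothetical sketch of how such ports could be declared with the OpenStack Terraform provider follows; the resource wiring, `count`, and name pattern are assumptions, not the actual testbed sources:

```hcl
# Hedged reconstruction from the plan output above; only the IP values and
# resource types are taken from the plan, everything else is assumed.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"  # .10 through .15
  }

  # Allow the shared VIPs to move between node ports without being
  # filtered by Neutron's anti-spoofing rules.
  dynamic "allowed_address_pairs" {
    for_each = ["192.168.16.254/32", "192.168.16.8/32", "192.168.16.9/32"]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}
```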
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  (the remaining security-group rules will also be created; each block matches rule1 in shape —
  ingress, IPv4, computed ids — and differs only as follows:
    security_group_management_rule2: description "wireguard", udp 51820, remote_ip_prefix "0.0.0.0/0"
    security_group_management_rule3: tcp, all ports, remote_ip_prefix "192.168.16.0/20"
    security_group_management_rule4: udp, all ports, remote_ip_prefix "192.168.16.0/20"
    security_group_management_rule5: icmp, remote_ip_prefix "0.0.0.0/0"
    security_group_node_rule1: tcp, all ports, remote_ip_prefix "0.0.0.0/0"
    security_group_node_rule2: udp, all ports, remote_ip_prefix "0.0.0.0/0"
    security_group_node_rule3: icmp, remote_ip_prefix "0.0.0.0/0"
    security_group_rule_vrrp: description "vrrp", protocol "112", remote_ip_prefix "0.0.0.0/0")

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-09 01:38:46.467790 | orchestrator | + network_id = (known after apply) 2026-04-09 01:38:46.467794 | orchestrator | + no_gateway = false 2026-04-09 01:38:46.467798 | orchestrator | + region = (known after apply) 2026-04-09 01:38:46.467802 | orchestrator | + service_types = (known after apply) 2026-04-09 01:38:46.467809 | orchestrator | + tenant_id = (known after apply) 2026-04-09 01:38:46.467813 | orchestrator | 2026-04-09 01:38:46.467817 | orchestrator | + allocation_pool { 2026-04-09 01:38:46.467821 | orchestrator | + end = "192.168.31.250" 2026-04-09 01:38:46.467825 | orchestrator | + start = "192.168.31.200" 2026-04-09 01:38:46.467829 | orchestrator | } 2026-04-09 01:38:46.467833 | orchestrator | } 2026-04-09 01:38:46.467837 | orchestrator | 2026-04-09 01:38:46.467841 | orchestrator | # terraform_data.image will be created 2026-04-09 01:38:46.467845 | orchestrator | + resource "terraform_data" "image" { 2026-04-09 01:38:46.467849 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.467853 | orchestrator | + input = "Ubuntu 24.04" 2026-04-09 01:38:46.467857 | orchestrator | + output = (known after apply) 2026-04-09 01:38:46.467861 | orchestrator | } 2026-04-09 01:38:46.467865 | orchestrator | 2026-04-09 01:38:46.467869 | orchestrator | # terraform_data.image_node will be created 2026-04-09 01:38:46.467873 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-09 01:38:46.467877 | orchestrator | + id = (known after apply) 2026-04-09 01:38:46.467881 | orchestrator | + input = "Ubuntu 24.04" 2026-04-09 01:38:46.467885 | orchestrator | + output = (known after apply) 2026-04-09 01:38:46.467889 | orchestrator | } 2026-04-09 01:38:46.467893 | orchestrator | 2026-04-09 01:38:46.467897 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-04-09 01:38:46.467901 | orchestrator | 2026-04-09 01:38:46.467905 | orchestrator | Changes to Outputs: 2026-04-09 01:38:46.467910 | orchestrator | + manager_address = (sensitive value) 2026-04-09 01:38:46.467914 | orchestrator | + private_key = (sensitive value) 2026-04-09 01:38:46.593140 | orchestrator | terraform_data.image: Creating... 2026-04-09 01:38:46.731934 | orchestrator | terraform_data.image: Creation complete after 0s [id=b5b09bcc-d419-3f29-ca35-2eb3a6c855a3] 2026-04-09 01:38:46.731995 | orchestrator | terraform_data.image_node: Creating... 2026-04-09 01:38:46.732002 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=ca77078d-ff1d-b877-655a-5f1ef831d9e7] 2026-04-09 01:38:46.754251 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-04-09 01:38:46.754520 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-04-09 01:38:46.766772 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-09 01:38:46.766835 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-09 01:38:46.766841 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-04-09 01:38:46.766846 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-04-09 01:38:46.766850 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-04-09 01:38:46.766912 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-09 01:38:46.767487 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-09 01:38:46.772368 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-04-09 01:38:47.205977 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-09 01:38:47.210587 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2026-04-09 01:38:47.224589 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-09 01:38:47.231662 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-04-09 01:38:47.311772 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-04-09 01:38:47.317443 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-09 01:38:47.812119 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=89e2c8cf-9c36-4013-ac02-c5d85177d21d] 2026-04-09 01:38:47.824535 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-04-09 01:38:50.371062 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105] 2026-04-09 01:38:50.378184 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-04-09 01:38:50.382704 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=5d5b0f3e-c55a-4f41-a738-3802883821be] 2026-04-09 01:38:50.387027 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-04-09 01:38:50.395872 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=162ed735-fecb-4ea3-8d95-f21f614c20ad] 2026-04-09 01:38:50.407583 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-04-09 01:38:50.419617 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=1aa61eee-0aa0-422d-af75-f23cbcca004e] 2026-04-09 01:38:50.420413 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=82469e2d-64d1-4f4a-b9b3-b380ac500ec4] 2026-04-09 01:38:50.424669 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
2026-04-09 01:38:50.426332 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-04-09 01:38:50.428227 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=ecc4ee99-00bb-43e9-af90-abdbfbfdafbf] 2026-04-09 01:38:50.437065 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-04-09 01:38:50.474397 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=e55aa834-7a03-4cc6-8559-f68ddba0a04d] 2026-04-09 01:38:50.476828 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=f5862b72-8b25-453b-aa97-7293a3d52761] 2026-04-09 01:38:50.501698 | orchestrator | local_file.id_rsa_pub: Creating... 2026-04-09 01:38:50.502517 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-04-09 01:38:50.506601 | orchestrator | local_file.id_rsa_pub: Creation complete after 1s [id=bc74f081742575648f383b4afaa0ca775ac16d64] 2026-04-09 01:38:50.518349 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-04-09 01:38:50.520694 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=8191df7bde18b151bce86fb4789c41f6582c0afa] 2026-04-09 01:38:50.523533 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=91609add-34c1-46d3-840a-9160ce481f74] 2026-04-09 01:38:51.166981 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=78f51fbd-2480-484a-bf4e-21c2c989255f] 2026-04-09 01:38:51.544609 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=dd8f47a0-5b06-4571-83b7-f3fedaed2227] 2026-04-09 01:38:51.547398 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-04-09 01:38:53.745458 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=9009f97f-5099-4efd-80df-b0fc690d20be] 2026-04-09 01:38:53.783945 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=a5bdaf30-515d-4ec5-b4e0-017d8e5d901e] 2026-04-09 01:38:53.803948 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=53e4edfb-5041-4373-b2f8-2931b10ee965] 2026-04-09 01:38:53.819575 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=dc1c8a18-4ba7-4c32-b16d-97b935c649ca] 2026-04-09 01:38:53.839077 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=0bd1f840-453a-48b2-ad16-1f5136864411] 2026-04-09 01:38:53.879111 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=482e14db-059a-45b3-acd4-80a1bc5c11af] 2026-04-09 01:38:53.883287 | orchestrator | openstack_networking_router_v2.router: Creation complete after 2s [id=0d8c1536-0c8d-448a-9e0e-da099be0e213] 2026-04-09 01:38:53.888110 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-04-09 01:38:53.888461 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-04-09 01:38:53.888531 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-04-09 01:38:54.059329 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=331e8559-4277-48f4-9d65-3d7efbd788de] 2026-04-09 01:38:54.071237 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=dc8d82ba-e3b9-4069-a573-39538b3b17ee] 2026-04-09 01:38:54.078321 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 
2026-04-09 01:38:54.081433 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-04-09 01:38:54.083799 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-04-09 01:38:54.084148 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-04-09 01:38:54.085848 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-04-09 01:38:54.088297 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-04-09 01:38:54.090116 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-04-09 01:38:54.092493 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-04-09 01:38:54.096618 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-04-09 01:38:54.225526 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=d5852bf9-feca-44aa-8834-0a90765a9e90] 2026-04-09 01:38:54.237402 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-04-09 01:38:54.372397 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=2c8aa494-d080-451c-baf1-69eaa3c05971] 2026-04-09 01:38:54.387183 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-04-09 01:38:54.516499 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=5a5c4a21-e757-46ed-99ba-82dc8906973d] 2026-04-09 01:38:54.524617 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 
2026-04-09 01:38:54.664224 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=3b26e58d-d71b-4129-b9bf-c2b6145a7b36] 2026-04-09 01:38:54.672096 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-04-09 01:38:54.674099 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=917da153-c1e2-4268-b5d4-3a290381f07e] 2026-04-09 01:38:54.681075 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-04-09 01:38:54.709583 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=d7608eef-e608-42c6-bf66-d35751b1ec0b] 2026-04-09 01:38:54.712645 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-04-09 01:38:54.786620 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=f79db8ad-1c4f-4b3a-8d4a-28f1520fcfb1] 2026-04-09 01:38:54.789936 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2026-04-09 01:38:54.879601 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=7e736ce5-966e-4891-a8dc-b63cc80967be] 2026-04-09 01:38:54.919305 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=ab3e455f-d410-4310-bdc0-1dc8d5730d50] 2026-04-09 01:38:54.924146 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=9d5b958c-7789-4411-8027-75190f4cedce] 2026-04-09 01:38:55.064182 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=6f3fc4b7-9dbb-4a80-b6d3-d3b161e66611] 2026-04-09 01:38:55.082405 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=94792bc2-1f7a-4681-be6c-e9d7c2420ae9] 2026-04-09 01:38:55.154911 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=b889e493-426f-4809-916c-d0b3470a49e1] 2026-04-09 01:38:55.203018 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=d490c038-7f47-41e7-b739-fc4edb2cca84] 2026-04-09 01:38:55.358093 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=3ffcf520-839a-46ad-bc08-253920244efd] 2026-04-09 01:38:55.506063 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=e00b0ebc-d116-485b-8c15-92e3ba1dc722] 2026-04-09 01:38:56.582317 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=5be994b4-6361-4abd-b76a-4fa45fa6565b] 2026-04-09 01:38:56.605993 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-04-09 01:38:56.626994 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 
2026-04-09 01:38:56.627149 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-04-09 01:38:56.637748 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-04-09 01:38:56.637878 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-04-09 01:38:56.637948 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-04-09 01:38:56.649290 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-04-09 01:38:58.069685 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=4bbf9326-1355-4461-ad1e-dfd817b57d79] 2026-04-09 01:38:58.080517 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-04-09 01:38:58.085392 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-04-09 01:38:58.085980 | orchestrator | local_file.inventory: Creating... 2026-04-09 01:38:58.092101 | orchestrator | local_file.inventory: Creation complete after 0s [id=8dc2944875ee2e2e189d0c5ffdb835802cfaa03f] 2026-04-09 01:38:58.093434 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=2cdc2438bd487d6c6c2aa0029af3cd1d6551dfe5] 2026-04-09 01:38:58.864826 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=4bbf9326-1355-4461-ad1e-dfd817b57d79] 2026-04-09 01:39:06.628141 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-04-09 01:39:06.628350 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-04-09 01:39:06.641770 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-04-09 01:39:06.641878 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[10s elapsed] 2026-04-09 01:39:06.650127 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-04-09 01:39:06.652427 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-04-09 01:39:16.628378 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-04-09 01:39:16.628499 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-04-09 01:39:16.642844 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-04-09 01:39:16.642958 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-04-09 01:39:16.651150 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-04-09 01:39:16.653356 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-04-09 01:39:17.049681 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=dbf09612-4d30-4521-8070-8fb248327645] 2026-04-09 01:39:17.142608 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=24f6201f-f8d3-4a40-8a18-7fd398793546] 2026-04-09 01:39:17.145362 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=7c173230-ab31-40de-8795-0bf7c85ddd9d] 2026-04-09 01:39:17.698452 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=688c82af-5155-48bf-93da-5d0be27515b2] 2026-04-09 01:39:26.643301 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-04-09 01:39:26.643386 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2026-04-09 01:39:27.381126 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=930c5b26-d9c5-4235-ad8a-846312eabc30] 2026-04-09 01:39:28.310509 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=e6355513-8374-4afc-b85e-aa7af800cac8] 2026-04-09 01:39:28.324306 | orchestrator | null_resource.node_semaphore: Creating... 2026-04-09 01:39:28.337816 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=1379854698011969573] 2026-04-09 01:39:28.337901 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-04-09 01:39:28.337911 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-04-09 01:39:28.339007 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-04-09 01:39:28.339430 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-04-09 01:39:28.339568 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-04-09 01:39:28.340518 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-04-09 01:39:28.365790 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-04-09 01:39:28.395957 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-04-09 01:39:28.397013 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-04-09 01:39:28.398488 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 
2026-04-09 01:39:31.752476 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=dbf09612-4d30-4521-8070-8fb248327645/60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105] 2026-04-09 01:39:31.767843 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=24f6201f-f8d3-4a40-8a18-7fd398793546/162ed735-fecb-4ea3-8d95-f21f614c20ad] 2026-04-09 01:39:31.780548 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=7c173230-ab31-40de-8795-0bf7c85ddd9d/e55aa834-7a03-4cc6-8559-f68ddba0a04d] 2026-04-09 01:39:31.807244 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=dbf09612-4d30-4521-8070-8fb248327645/ecc4ee99-00bb-43e9-af90-abdbfbfdafbf] 2026-04-09 01:39:31.810360 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=7c173230-ab31-40de-8795-0bf7c85ddd9d/82469e2d-64d1-4f4a-b9b3-b380ac500ec4] 2026-04-09 01:39:31.819832 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=24f6201f-f8d3-4a40-8a18-7fd398793546/5d5b0f3e-c55a-4f41-a738-3802883821be] 2026-04-09 01:39:37.905628 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=dbf09612-4d30-4521-8070-8fb248327645/91609add-34c1-46d3-840a-9160ce481f74] 2026-04-09 01:39:37.907614 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=7c173230-ab31-40de-8795-0bf7c85ddd9d/1aa61eee-0aa0-422d-af75-f23cbcca004e] 2026-04-09 01:39:37.941684 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=24f6201f-f8d3-4a40-8a18-7fd398793546/f5862b72-8b25-453b-aa97-7293a3d52761] 2026-04-09 01:39:38.399925 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-04-09 01:39:48.400643 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-04-09 01:39:48.760676 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=e51307ec-6974-4311-b179-76ff79ac6a90] 2026-04-09 01:39:48.778876 | orchestrator | 2026-04-09 01:39:48.778944 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-04-09 01:39:48.778987 | orchestrator | 2026-04-09 01:39:48.778998 | orchestrator | Outputs: 2026-04-09 01:39:48.779007 | orchestrator | 2026-04-09 01:39:48.779038 | orchestrator | manager_address = 2026-04-09 01:39:48.779049 | orchestrator | private_key = 2026-04-09 01:39:49.220965 | orchestrator | ok: Runtime: 0:01:06.687560 2026-04-09 01:39:49.253265 | 2026-04-09 01:39:49.253412 | TASK [Fetch manager address] 2026-04-09 01:39:49.744746 | orchestrator | ok 2026-04-09 01:39:49.753968 | 2026-04-09 01:39:49.754089 | TASK [Set manager_host address] 2026-04-09 01:39:49.829849 | orchestrator | ok 2026-04-09 01:39:49.839571 | 2026-04-09 01:39:49.839700 | LOOP [Update ansible collections] 2026-04-09 01:39:51.499152 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-09 01:39:51.499626 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-09 01:39:51.499695 | orchestrator | Starting galaxy collection install process 2026-04-09 01:39:51.499739 | orchestrator | Process install dependency map 2026-04-09 01:39:51.499778 | orchestrator | Starting collection install process 2026-04-09 01:39:51.499814 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2026-04-09 01:39:51.499854 | orchestrator | Created collection for osism.commons:999.0.0 at 
/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2026-04-09 01:39:51.499896 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-04-09 01:39:51.499986 | orchestrator | ok: Item: commons Runtime: 0:00:01.326500 2026-04-09 01:39:52.675415 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-09 01:39:52.675592 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-09 01:39:52.675646 | orchestrator | Starting galaxy collection install process 2026-04-09 01:39:52.675686 | orchestrator | Process install dependency map 2026-04-09 01:39:52.675723 | orchestrator | Starting collection install process 2026-04-09 01:39:52.675760 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2026-04-09 01:39:52.675813 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2026-04-09 01:39:52.675849 | orchestrator | osism.services:999.0.0 was installed successfully 2026-04-09 01:39:52.675902 | orchestrator | ok: Item: services Runtime: 0:00:00.796000 2026-04-09 01:39:52.695214 | 2026-04-09 01:39:52.695377 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-09 01:40:03.330680 | orchestrator | ok 2026-04-09 01:40:03.345694 | 2026-04-09 01:40:03.345840 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-09 01:41:03.395723 | orchestrator | ok 2026-04-09 01:41:03.406271 | 2026-04-09 01:41:03.406414 | TASK [Fetch manager ssh hostkey] 2026-04-09 01:41:04.984260 | orchestrator | Output suppressed because no_log was given 2026-04-09 01:41:05.000091 | 2026-04-09 01:41:05.000269 | TASK [Get ssh keypair from terraform environment] 2026-04-09 01:41:05.537819 | orchestrator | ok: Runtime: 0:00:00.012941 2026-04-09 01:41:05.554868 | 
2026-04-09 01:41:05.555122 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-09 01:41:05.594888 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-09 01:41:05.605450 | 2026-04-09 01:41:05.605642 | TASK [Run manager part 0] 2026-04-09 01:41:06.609295 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-09 01:41:06.752356 | orchestrator | 2026-04-09 01:41:06.752412 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-09 01:41:06.752421 | orchestrator | 2026-04-09 01:41:06.752437 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-09 01:41:09.036129 | orchestrator | ok: [testbed-manager] 2026-04-09 01:41:09.036193 | orchestrator | 2026-04-09 01:41:09.036279 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-09 01:41:09.036293 | orchestrator | 2026-04-09 01:41:09.036306 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 01:41:11.086808 | orchestrator | ok: [testbed-manager] 2026-04-09 01:41:11.098335 | orchestrator | 2026-04-09 01:41:11.098373 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-09 01:41:11.854112 | orchestrator | ok: [testbed-manager] 2026-04-09 01:41:11.854187 | orchestrator | 2026-04-09 01:41:11.854224 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-09 01:41:11.908815 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:41:11.908862 | orchestrator | 2026-04-09 01:41:11.908874 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-09 
01:41:11.940567 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:41:11.940608 | orchestrator | 2026-04-09 01:41:11.940618 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-09 01:41:11.977125 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:41:11.977162 | orchestrator | 2026-04-09 01:41:11.977170 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-09 01:41:12.833239 | orchestrator | changed: [testbed-manager] 2026-04-09 01:41:12.833293 | orchestrator | 2026-04-09 01:41:12.833301 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-09 01:44:16.508676 | orchestrator | changed: [testbed-manager] 2026-04-09 01:44:16.508742 | orchestrator | 2026-04-09 01:44:16.508754 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-09 01:45:50.321791 | orchestrator | changed: [testbed-manager] 2026-04-09 01:45:50.321918 | orchestrator | 2026-04-09 01:45:50.321939 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-09 01:46:15.821519 | orchestrator | changed: [testbed-manager] 2026-04-09 01:46:15.821590 | orchestrator | 2026-04-09 01:46:15.821606 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-09 01:46:25.947463 | orchestrator | changed: [testbed-manager] 2026-04-09 01:46:25.948470 | orchestrator | 2026-04-09 01:46:25.948508 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-09 01:46:25.998396 | orchestrator | ok: [testbed-manager] 2026-04-09 01:46:25.998469 | orchestrator | 2026-04-09 01:46:25.998485 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-09 01:46:26.844695 | orchestrator | ok: [testbed-manager] 2026-04-09 01:46:26.844736 
| orchestrator | 2026-04-09 01:46:26.844745 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-09 01:46:27.651629 | orchestrator | changed: [testbed-manager] 2026-04-09 01:46:27.651681 | orchestrator | 2026-04-09 01:46:27.651694 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-09 01:46:34.500587 | orchestrator | changed: [testbed-manager] 2026-04-09 01:46:34.500667 | orchestrator | 2026-04-09 01:46:34.500691 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-09 01:46:41.164322 | orchestrator | changed: [testbed-manager] 2026-04-09 01:46:41.165271 | orchestrator | 2026-04-09 01:46:41.165310 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-09 01:46:44.082797 | orchestrator | changed: [testbed-manager] 2026-04-09 01:46:44.082866 | orchestrator | 2026-04-09 01:46:44.082876 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-09 01:46:46.047659 | orchestrator | changed: [testbed-manager] 2026-04-09 01:46:46.047715 | orchestrator | 2026-04-09 01:46:46.047727 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-09 01:46:47.204702 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-09 01:46:47.204825 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-09 01:46:47.204832 | orchestrator | 2026-04-09 01:46:47.204839 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-09 01:46:47.238553 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-09 01:46:47.238601 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-04-09 01:46:47.238607 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-09 01:46:47.238613 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-09 01:46:52.340529 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-09 01:46:52.340569 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-09 01:46:52.340574 | orchestrator | 2026-04-09 01:46:52.340579 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-09 01:46:52.933808 | orchestrator | changed: [testbed-manager] 2026-04-09 01:46:52.998230 | orchestrator | 2026-04-09 01:46:52.998295 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-09 01:49:13.158939 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-09 01:49:13.159061 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-09 01:49:13.159079 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-09 01:49:13.159091 | orchestrator | 2026-04-09 01:49:13.159103 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-09 01:49:15.618257 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-09 01:49:15.618294 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-09 01:49:15.618299 | orchestrator | 2026-04-09 01:49:15.618305 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-09 01:49:15.618310 | orchestrator | 2026-04-09 01:49:15.618314 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 01:49:17.069482 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:17.069520 | orchestrator | 
2026-04-09 01:49:17.069526 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-09 01:49:17.104528 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:17.104563 | orchestrator | 2026-04-09 01:49:17.104569 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-09 01:49:17.184545 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:17.184595 | orchestrator | 2026-04-09 01:49:17.184605 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-09 01:49:18.047531 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:18.047581 | orchestrator | 2026-04-09 01:49:18.047591 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-09 01:49:18.791419 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:18.791602 | orchestrator | 2026-04-09 01:49:18.791631 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-09 01:49:20.259247 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-09 01:49:20.259344 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-09 01:49:20.259364 | orchestrator | 2026-04-09 01:49:20.259382 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-09 01:49:21.730827 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:21.730926 | orchestrator | 2026-04-09 01:49:21.730940 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-09 01:49:23.565142 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 01:49:23.565182 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-09 01:49:23.565196 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-09 01:49:23.565201 
| orchestrator | 2026-04-09 01:49:23.565207 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-09 01:49:23.617581 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:49:23.617661 | orchestrator | 2026-04-09 01:49:23.617671 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-09 01:49:23.686135 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:49:23.686195 | orchestrator | 2026-04-09 01:49:23.686204 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-09 01:49:24.275165 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:24.275240 | orchestrator | 2026-04-09 01:49:24.275252 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-09 01:49:24.350191 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:49:24.350274 | orchestrator | 2026-04-09 01:49:24.350285 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-09 01:49:25.265337 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 01:49:25.265463 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:25.265477 | orchestrator | 2026-04-09 01:49:25.265486 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-09 01:49:25.305017 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:49:25.305091 | orchestrator | 2026-04-09 01:49:25.305100 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-09 01:49:25.335890 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:49:25.335960 | orchestrator | 2026-04-09 01:49:25.335970 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-09 01:49:25.377910 | orchestrator | skipping: 
[testbed-manager] 2026-04-09 01:49:25.377993 | orchestrator | 2026-04-09 01:49:25.378006 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-09 01:49:25.455632 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:49:25.455727 | orchestrator | 2026-04-09 01:49:25.455746 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-09 01:49:26.238896 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:26.238990 | orchestrator | 2026-04-09 01:49:26.239007 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-09 01:49:26.239019 | orchestrator | 2026-04-09 01:49:26.239032 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 01:49:27.682420 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:27.682481 | orchestrator | 2026-04-09 01:49:27.682487 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-09 01:49:28.645124 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:28.645172 | orchestrator | 2026-04-09 01:49:28.645179 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:49:28.645188 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-09 01:49:28.645194 | orchestrator | 2026-04-09 01:49:29.106975 | orchestrator | ok: Runtime: 0:08:22.862681 2026-04-09 01:49:29.124779 | 2026-04-09 01:49:29.124937 | TASK [Point out that logging in to the manager is now possible] 2026-04-09 01:49:29.164927 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 
2026-04-09 01:49:29.174941 | 2026-04-09 01:49:29.175082 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-09 01:49:29.224047 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2026-04-09 01:49:29.231208 | 2026-04-09 01:49:29.231327 | TASK [Run manager part 1 + 2] 2026-04-09 01:49:30.136404 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-09 01:49:30.199937 | orchestrator | 2026-04-09 01:49:30.200001 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-09 01:49:30.200012 | orchestrator | 2026-04-09 01:49:30.200030 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 01:49:33.308312 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:33.308363 | orchestrator | 2026-04-09 01:49:33.308382 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-09 01:49:33.346464 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:49:33.346520 | orchestrator | 2026-04-09 01:49:33.346530 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-09 01:49:33.391152 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:33.391210 | orchestrator | 2026-04-09 01:49:33.391220 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-09 01:49:33.437534 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:33.437587 | orchestrator | 2026-04-09 01:49:33.437597 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-09 01:49:33.507848 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:33.507911 | orchestrator | 2026-04-09 01:49:33.507922 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-09 01:49:33.576895 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:33.576954 | orchestrator | 2026-04-09 01:49:33.576965 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-09 01:49:33.630712 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-09 01:49:33.630772 | orchestrator | 2026-04-09 01:49:33.630781 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-09 01:49:34.391938 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:34.392021 | orchestrator | 2026-04-09 01:49:34.392035 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-09 01:49:34.438925 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:49:34.438987 | orchestrator | 2026-04-09 01:49:34.438999 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-09 01:49:35.913034 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:35.913102 | orchestrator | 2026-04-09 01:49:35.913113 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-09 01:49:36.548993 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:36.549050 | orchestrator | 2026-04-09 01:49:36.549058 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-09 01:49:37.800807 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:37.800871 | orchestrator | 2026-04-09 01:49:37.800886 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-09 01:49:54.625107 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:54.625351 | orchestrator | 
2026-04-09 01:49:54.625389 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-09 01:49:55.373989 | orchestrator | ok: [testbed-manager] 2026-04-09 01:49:55.374068 | orchestrator | 2026-04-09 01:49:55.374077 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-09 01:49:55.432015 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:49:55.432075 | orchestrator | 2026-04-09 01:49:55.432089 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-09 01:49:56.463002 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:56.463081 | orchestrator | 2026-04-09 01:49:56.463092 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-09 01:49:57.449991 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:57.450059 | orchestrator | 2026-04-09 01:49:57.450067 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-09 01:49:58.046335 | orchestrator | changed: [testbed-manager] 2026-04-09 01:49:58.046380 | orchestrator | 2026-04-09 01:49:58.046389 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-09 01:49:58.090578 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-09 01:49:58.090667 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-09 01:49:58.090675 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-09 01:49:58.090680 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-09 01:50:01.122047 | orchestrator | changed: [testbed-manager] 2026-04-09 01:50:01.122100 | orchestrator | 2026-04-09 01:50:01.122108 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-09 01:50:11.363251 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-09 01:50:11.363341 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-09 01:50:11.363355 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-09 01:50:11.363364 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-09 01:50:11.363379 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-09 01:50:11.363387 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-09 01:50:11.363395 | orchestrator | 2026-04-09 01:50:11.363404 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-09 01:50:12.470092 | orchestrator | changed: [testbed-manager] 2026-04-09 01:50:12.470198 | orchestrator | 2026-04-09 01:50:12.470213 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-09 01:50:15.504688 | orchestrator | changed: [testbed-manager] 2026-04-09 01:50:15.505435 | orchestrator | 2026-04-09 01:50:15.505504 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-09 01:50:15.541395 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:50:15.541507 | orchestrator | 2026-04-09 01:50:15.541519 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-09 01:52:05.754010 | orchestrator | changed: [testbed-manager] 2026-04-09 01:52:05.754114 | orchestrator | 2026-04-09 01:52:05.754124 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-09 01:52:07.020362 | orchestrator | ok: [testbed-manager] 2026-04-09 01:52:07.020422 | 
orchestrator | 2026-04-09 01:52:07.020436 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:52:07.020446 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-09 01:52:07.020455 | orchestrator | 2026-04-09 01:52:07.378278 | orchestrator | ok: Runtime: 0:02:37.576233 2026-04-09 01:52:07.395625 | 2026-04-09 01:52:07.395830 | TASK [Reboot manager] 2026-04-09 01:52:08.934422 | orchestrator | ok: Runtime: 0:00:00.995813 2026-04-09 01:52:08.952255 | 2026-04-09 01:52:08.952410 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-09 01:52:25.383316 | orchestrator | ok 2026-04-09 01:52:25.391871 | 2026-04-09 01:52:25.391976 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-09 01:53:25.433518 | orchestrator | ok 2026-04-09 01:53:25.441661 | 2026-04-09 01:53:25.441778 | TASK [Deploy manager + bootstrap nodes] 2026-04-09 01:53:28.427074 | orchestrator | 2026-04-09 01:53:28.427203 | orchestrator | # DEPLOY MANAGER 2026-04-09 01:53:28.427213 | orchestrator | 2026-04-09 01:53:28.427219 | orchestrator | + set -e 2026-04-09 01:53:28.427224 | orchestrator | + echo 2026-04-09 01:53:28.427230 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-09 01:53:28.427237 | orchestrator | + echo 2026-04-09 01:53:28.427294 | orchestrator | + cat /opt/manager-vars.sh 2026-04-09 01:53:28.431817 | orchestrator | export NUMBER_OF_NODES=6 2026-04-09 01:53:28.431876 | orchestrator | 2026-04-09 01:53:28.431885 | orchestrator | export CEPH_VERSION=reef 2026-04-09 01:53:28.431893 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-09 01:53:28.431901 | orchestrator | export MANAGER_VERSION=9.5.0 2026-04-09 01:53:28.431919 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-09 01:53:28.431928 | orchestrator | 2026-04-09 01:53:28.431942 | orchestrator | export ARA=false 2026-04-09 01:53:28.431952 | orchestrator 
| export DEPLOY_MODE=manager 2026-04-09 01:53:28.431964 | orchestrator | export TEMPEST=false 2026-04-09 01:53:28.431973 | orchestrator | export IS_ZUUL=true 2026-04-09 01:53:28.431981 | orchestrator | 2026-04-09 01:53:28.431995 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 01:53:28.432005 | orchestrator | export EXTERNAL_API=false 2026-04-09 01:53:28.432013 | orchestrator | 2026-04-09 01:53:28.432021 | orchestrator | export IMAGE_USER=ubuntu 2026-04-09 01:53:28.432033 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-09 01:53:28.432041 | orchestrator | 2026-04-09 01:53:28.432049 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-09 01:53:28.432363 | orchestrator | 2026-04-09 01:53:28.432379 | orchestrator | + echo 2026-04-09 01:53:28.432390 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 01:53:28.433727 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 01:53:28.433777 | orchestrator | ++ INTERACTIVE=false 2026-04-09 01:53:28.433786 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 01:53:28.433794 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 01:53:28.433977 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 01:53:28.433989 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 01:53:28.433996 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 01:53:28.434002 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 01:53:28.434008 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 01:53:28.434052 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 01:53:28.434063 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 01:53:28.434306 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 01:53:28.434323 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 01:53:28.434330 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 01:53:28.434352 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 01:53:28.434366 | orchestrator | ++ export ARA=false 
2026-04-09 01:53:28.434372 | orchestrator | ++ ARA=false 2026-04-09 01:53:28.434379 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 01:53:28.434385 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 01:53:28.434392 | orchestrator | ++ export TEMPEST=false 2026-04-09 01:53:28.434399 | orchestrator | ++ TEMPEST=false 2026-04-09 01:53:28.434405 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 01:53:28.434412 | orchestrator | ++ IS_ZUUL=true 2026-04-09 01:53:28.434418 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 01:53:28.434425 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 01:53:28.434431 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 01:53:28.434437 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 01:53:28.434443 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 01:53:28.434449 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 01:53:28.434460 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 01:53:28.434466 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 01:53:28.434472 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 01:53:28.434478 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 01:53:28.434485 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-09 01:53:28.491390 | orchestrator | + docker version 2026-04-09 01:53:28.629455 | orchestrator | Client: Docker Engine - Community 2026-04-09 01:53:28.629592 | orchestrator | Version: 27.5.1 2026-04-09 01:53:28.629607 | orchestrator | API version: 1.47 2026-04-09 01:53:28.629615 | orchestrator | Go version: go1.22.11 2026-04-09 01:53:28.629623 | orchestrator | Git commit: 9f9e405 2026-04-09 01:53:28.629630 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-09 01:53:28.629639 | orchestrator | OS/Arch: linux/amd64 2026-04-09 01:53:28.629646 | orchestrator | Context: default 2026-04-09 01:53:28.629653 | orchestrator | 2026-04-09 01:53:28.629661 | 
orchestrator | Server: Docker Engine - Community 2026-04-09 01:53:28.629669 | orchestrator | Engine: 2026-04-09 01:53:28.629677 | orchestrator | Version: 27.5.1 2026-04-09 01:53:28.629684 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-09 01:53:28.629732 | orchestrator | Go version: go1.22.11 2026-04-09 01:53:28.629741 | orchestrator | Git commit: 4c9b3b0 2026-04-09 01:53:28.629748 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-09 01:53:28.629756 | orchestrator | OS/Arch: linux/amd64 2026-04-09 01:53:28.629763 | orchestrator | Experimental: false 2026-04-09 01:53:28.629771 | orchestrator | containerd: 2026-04-09 01:53:28.629779 | orchestrator | Version: v2.2.2 2026-04-09 01:53:28.629786 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-09 01:53:28.629794 | orchestrator | runc: 2026-04-09 01:53:28.629801 | orchestrator | Version: 1.3.4 2026-04-09 01:53:28.629809 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-09 01:53:28.629816 | orchestrator | docker-init: 2026-04-09 01:53:28.629823 | orchestrator | Version: 0.19.0 2026-04-09 01:53:28.629831 | orchestrator | GitCommit: de40ad0 2026-04-09 01:53:28.634597 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-09 01:53:28.643008 | orchestrator | + set -e 2026-04-09 01:53:28.643106 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 01:53:28.643124 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 01:53:28.643137 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 01:53:28.643149 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 01:53:28.643161 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 01:53:28.643174 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 01:53:28.643187 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 01:53:28.643199 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 01:53:28.643209 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 01:53:28.643222 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-04-09 01:53:28.643234 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 01:53:28.643245 | orchestrator | ++ export ARA=false 2026-04-09 01:53:28.643256 | orchestrator | ++ ARA=false 2026-04-09 01:53:28.643268 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 01:53:28.643279 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 01:53:28.643291 | orchestrator | ++ export TEMPEST=false 2026-04-09 01:53:28.643303 | orchestrator | ++ TEMPEST=false 2026-04-09 01:53:28.643314 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 01:53:28.643325 | orchestrator | ++ IS_ZUUL=true 2026-04-09 01:53:28.643337 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 01:53:28.643350 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 01:53:28.643362 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 01:53:28.643374 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 01:53:28.643385 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 01:53:28.643396 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 01:53:28.643408 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 01:53:28.643419 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 01:53:28.643431 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 01:53:28.643443 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 01:53:28.643454 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 01:53:28.643465 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 01:53:28.643476 | orchestrator | ++ INTERACTIVE=false 2026-04-09 01:53:28.643487 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 01:53:28.643504 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 01:53:28.643554 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-09 01:53:28.643568 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-04-09 01:53:28.650872 | orchestrator | + set -e 2026-04-09 
01:53:28.650952 | orchestrator | + VERSION=9.5.0 2026-04-09 01:53:28.650968 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-04-09 01:53:28.659986 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-09 01:53:28.660056 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-09 01:53:28.664451 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-09 01:53:28.668222 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-04-09 01:53:28.674649 | orchestrator | /opt/configuration ~ 2026-04-09 01:53:28.674711 | orchestrator | + set -e 2026-04-09 01:53:28.674719 | orchestrator | + pushd /opt/configuration 2026-04-09 01:53:28.674726 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-09 01:53:28.678414 | orchestrator | + source /opt/venv/bin/activate 2026-04-09 01:53:28.679621 | orchestrator | ++ deactivate nondestructive 2026-04-09 01:53:28.680553 | orchestrator | ++ '[' -n '' ']' 2026-04-09 01:53:28.680587 | orchestrator | ++ '[' -n '' ']' 2026-04-09 01:53:28.680618 | orchestrator | ++ hash -r 2026-04-09 01:53:28.680624 | orchestrator | ++ '[' -n '' ']' 2026-04-09 01:53:28.680630 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-09 01:53:28.680636 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-09 01:53:28.680642 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']'
2026-04-09 01:53:28.680648 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-09 01:53:28.680655 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-09 01:53:28.680660 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-09 01:53:28.680666 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-09 01:53:28.680673 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 01:53:28.680680 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 01:53:28.680685 | orchestrator | ++ export PATH
2026-04-09 01:53:28.680692 | orchestrator | ++ '[' -n '' ']'
2026-04-09 01:53:28.680698 | orchestrator | ++ '[' -z '' ']'
2026-04-09 01:53:28.680704 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-09 01:53:28.680709 | orchestrator | ++ PS1='(venv) '
2026-04-09 01:53:28.680715 | orchestrator | ++ export PS1
2026-04-09 01:53:28.680720 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-09 01:53:28.680726 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-09 01:53:28.680733 | orchestrator | ++ hash -r
2026-04-09 01:53:28.680739 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-09 01:53:30.246201 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-09 01:53:30.247296 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-09 01:53:30.248846 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-09 01:53:30.250599 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-09 01:53:30.251682 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-09 01:53:30.262730 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-09 01:53:30.264394 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-09 01:53:30.265625 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-09 01:53:30.267143 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-09 01:53:30.304326 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-09 01:53:30.305590 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-09 01:53:30.307503 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-09 01:53:30.308848 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-09 01:53:30.313120 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-09 01:53:30.554670 | orchestrator | ++ which gilt
2026-04-09 01:53:30.559685 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-09 01:53:30.559756 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-09 01:53:30.847906 | orchestrator | osism.cfg-generics:
2026-04-09 01:53:31.004745 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-09 01:53:31.004868 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-09 01:53:31.004945 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-09 01:53:31.005442 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-09 01:53:32.223851 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-09 01:53:32.235585 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-09 01:53:32.592674 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-09 01:53:32.650128 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-09 01:53:32.650270 | orchestrator | + deactivate
2026-04-09 01:53:32.650290 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-09 01:53:32.650302 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 01:53:32.650311 | orchestrator | + export PATH
2026-04-09 01:53:32.650327 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-09 01:53:32.650344 | orchestrator | + '[' -n '' ']'
2026-04-09 01:53:32.650363 | orchestrator | + hash -r
2026-04-09 01:53:32.650380 | orchestrator | + '[' -n '' ']'
2026-04-09 01:53:32.650396 | orchestrator | + unset VIRTUAL_ENV
2026-04-09 01:53:32.650426 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-09 01:53:32.650453 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-09 01:53:32.650470 | orchestrator | + unset -f deactivate
2026-04-09 01:53:32.650484 | orchestrator | + popd
2026-04-09 01:53:32.650516 | orchestrator | ~
2026-04-09 01:53:32.652473 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-09 01:53:32.652566 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-09 01:53:32.652697 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-09 01:53:32.716981 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 01:53:32.717073 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-09 01:53:32.718466 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-09 01:53:32.791838 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-09 01:53:32.792436 | orchestrator | ++ semver 2024.2 2025.1
2026-04-09 01:53:32.864196 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-09 01:53:32.864307 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-09 01:53:32.968174 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-09 01:53:32.968278 | orchestrator | + source /opt/venv/bin/activate
2026-04-09 01:53:32.968301 | orchestrator | ++ deactivate nondestructive
2026-04-09 01:53:32.968322 | orchestrator | ++ '[' -n '' ']'
2026-04-09 01:53:32.968340 | orchestrator | ++ '[' -n '' ']'
2026-04-09 01:53:32.968356 | orchestrator | ++ hash -r
2026-04-09 01:53:32.968370 | orchestrator | ++ '[' -n '' ']'
2026-04-09 01:53:32.968381 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-09 01:53:32.968391 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-09 01:53:32.968401 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-09 01:53:32.968411 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-09 01:53:32.968422 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-09 01:53:32.968432 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-09 01:53:32.968444 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-09 01:53:32.968462 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 01:53:32.968516 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 01:53:32.968569 | orchestrator | ++ export PATH
2026-04-09 01:53:32.968580 | orchestrator | ++ '[' -n '' ']'
2026-04-09 01:53:32.968590 | orchestrator | ++ '[' -z '' ']'
2026-04-09 01:53:32.968600 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-09 01:53:32.968610 | orchestrator | ++ PS1='(venv) '
2026-04-09 01:53:32.968620 | orchestrator | ++ export PS1
2026-04-09 01:53:32.968630 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-09 01:53:32.968640 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-09 01:53:32.968650 | orchestrator | ++ hash -r
2026-04-09 01:53:32.968660 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-09 01:53:34.438284 | orchestrator |
2026-04-09 01:53:34.438362 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-09 01:53:34.438369 | orchestrator |
2026-04-09 01:53:34.438373 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-09 01:53:35.083167 | orchestrator | ok: [testbed-manager]
2026-04-09 01:53:35.083257 | orchestrator |
2026-04-09 01:53:35.083270 | orchestrator | TASK [Copy fact files] *********************************************************
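The `source /opt/venv/bin/activate` and `deactivate` traces above follow a save-and-restore pattern: activation stashes the current `PATH` in `_OLD_VIRTUAL_PATH` and prepends the venv's `bin` directory; deactivation restores the saved value. A minimal sketch of that pattern (not the real activate script, which additionally manages `PS1`, `VIRTUAL_ENV_PROMPT`, the `nondestructive` flag, and `hash -r`):

```shell
# Sketch of the venv activate/deactivate save-and-restore pattern.
# /opt/venv is the venv path used in this job; the real script does more.
activate() {
  _OLD_VIRTUAL_PATH="$PATH"          # remember the pre-venv PATH
  VIRTUAL_ENV=/opt/venv
  export VIRTUAL_ENV
  PATH="$VIRTUAL_ENV/bin:$PATH"      # venv binaries win lookups
  export PATH
}

deactivate() {
  PATH="$_OLD_VIRTUAL_PATH"          # restore the original PATH
  export PATH
  unset _OLD_VIRTUAL_PATH VIRTUAL_ENV
}
```

Because the old `PATH` is saved verbatim, activation and deactivation round-trip cleanly even when the job toggles the venv several times, as this script does around the `gilt overlay` and `ansible-playbook` steps.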
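The `semver` comparisons above (`semver 9.5.0 7.0.0` yielding 1, `semver 9.5.0 10.0.0-0` yielding -1) gate version-dependent configuration such as `enable_osism_kubernetes: true`. A hypothetical sketch of such a three-way compare built on GNU `sort -V` (the helper actually used by the testbed scripts may differ; pre-release handling here is simplified):

```shell
# Hypothetical three-way version compare: prints 1, 0, or -1 for
# a > b, a == b, a < b. Relies on GNU coreutils `sort -V`.
semver() {
  a="$1"; b="$2"
  if [ "$a" = "$b" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
    echo -1   # a sorts first, so a < b
  else
    echo 1
  fi
}
```

Under this sketch, `[[ $(semver 9.5.0 7.0.0) -ge 0 ]]` is the "running at least version 7" gate seen in the trace.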
2026-04-09 01:53:36.264960 | orchestrator | changed: [testbed-manager]
2026-04-09 01:53:36.265059 | orchestrator |
2026-04-09 01:53:36.265073 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-09 01:53:36.265105 | orchestrator |
2026-04-09 01:53:36.265114 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 01:53:38.971033 | orchestrator | ok: [testbed-manager]
2026-04-09 01:53:38.971162 | orchestrator |
2026-04-09 01:53:38.971190 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-09 01:53:39.025049 | orchestrator | ok: [testbed-manager]
2026-04-09 01:53:39.025148 | orchestrator |
2026-04-09 01:53:39.025167 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-09 01:53:39.561328 | orchestrator | changed: [testbed-manager]
2026-04-09 01:53:39.561429 | orchestrator |
2026-04-09 01:53:39.561448 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-09 01:53:39.610692 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:53:39.610771 | orchestrator |
2026-04-09 01:53:39.610781 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-09 01:53:40.049657 | orchestrator | changed: [testbed-manager]
2026-04-09 01:53:40.049758 | orchestrator |
2026-04-09 01:53:40.049776 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-09 01:53:40.434306 | orchestrator | ok: [testbed-manager]
2026-04-09 01:53:40.434375 | orchestrator |
2026-04-09 01:53:40.434382 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-09 01:53:40.576116 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:53:40.576195 | orchestrator |
2026-04-09 01:53:40.576204 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-09 01:53:40.576211 | orchestrator |
2026-04-09 01:53:40.576217 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 01:53:42.535796 | orchestrator | ok: [testbed-manager]
2026-04-09 01:53:42.535876 | orchestrator |
2026-04-09 01:53:42.535884 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-09 01:53:42.689731 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-09 01:53:42.689805 | orchestrator |
2026-04-09 01:53:42.689812 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-09 01:53:42.769715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-09 01:53:42.769806 | orchestrator |
2026-04-09 01:53:42.769820 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-09 01:53:44.044407 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-09 01:53:44.044516 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-09 01:53:44.044558 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-09 01:53:44.044573 | orchestrator |
2026-04-09 01:53:44.044585 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-09 01:53:46.253129 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-09 01:53:46.253224 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-09 01:53:46.253236 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-09 01:53:46.253246 | orchestrator |
2026-04-09 01:53:46.253257 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-09 01:53:46.998799 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 01:53:46.998902 | orchestrator | changed: [testbed-manager]
2026-04-09 01:53:46.998919 | orchestrator |
2026-04-09 01:53:46.998932 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-09 01:53:47.762154 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 01:53:47.762713 | orchestrator | changed: [testbed-manager]
2026-04-09 01:53:47.762748 | orchestrator |
2026-04-09 01:53:47.762772 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-09 01:53:47.821469 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:53:47.821579 | orchestrator |
2026-04-09 01:53:47.821594 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-09 01:53:48.175506 | orchestrator | ok: [testbed-manager]
2026-04-09 01:53:48.175635 | orchestrator |
2026-04-09 01:53:48.175652 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-09 01:53:48.250652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-09 01:53:48.250733 | orchestrator |
2026-04-09 01:53:48.250748 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-09 01:53:49.351232 | orchestrator | changed: [testbed-manager]
2026-04-09 01:53:49.351320 | orchestrator |
2026-04-09 01:53:49.351335 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-09 01:53:50.238638 | orchestrator | changed: [testbed-manager]
2026-04-09 01:53:50.238752 | orchestrator |
2026-04-09 01:53:50.238769 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-09 01:54:08.803910 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:08.804026 | orchestrator |
2026-04-09 01:54:08.804044 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-09 01:54:08.859240 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:54:08.859431 | orchestrator |
2026-04-09 01:54:08.859471 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-09 01:54:08.859479 | orchestrator |
2026-04-09 01:54:08.859485 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 01:54:10.844603 | orchestrator | ok: [testbed-manager]
2026-04-09 01:54:10.844707 | orchestrator |
2026-04-09 01:54:10.844726 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-09 01:54:10.979642 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-09 01:54:10.979743 | orchestrator |
2026-04-09 01:54:10.979759 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-09 01:54:11.055943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 01:54:11.056083 | orchestrator |
2026-04-09 01:54:11.056111 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-09 01:54:13.834406 | orchestrator | ok: [testbed-manager]
2026-04-09 01:54:13.834513 | orchestrator |
2026-04-09 01:54:13.834532 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-09 01:54:13.887614 | orchestrator | ok: [testbed-manager]
2026-04-09 01:54:13.887723 | orchestrator |
2026-04-09 01:54:13.887741 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-09 01:54:14.050304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-09 01:54:14.050408 | orchestrator |
2026-04-09 01:54:14.050426 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-09 01:54:17.163503 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-09 01:54:17.163660 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-09 01:54:17.163676 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-09 01:54:17.163688 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-09 01:54:17.163699 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-09 01:54:17.163711 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-09 01:54:17.163722 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-09 01:54:17.163733 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-09 01:54:17.163744 | orchestrator |
2026-04-09 01:54:17.163757 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-09 01:54:17.845636 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:17.845760 | orchestrator |
2026-04-09 01:54:17.845785 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-09 01:54:18.533097 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:18.533198 | orchestrator |
2026-04-09 01:54:18.533210 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-09 01:54:18.619791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-09 01:54:18.619918 | orchestrator |
2026-04-09 01:54:18.619940 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-09 01:54:19.938452 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-09 01:54:19.938625 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-09 01:54:19.938644 | orchestrator |
2026-04-09 01:54:19.938654 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-09 01:54:20.647500 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:20.647732 | orchestrator |
2026-04-09 01:54:20.647764 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-09 01:54:20.708680 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:54:20.708775 | orchestrator |
2026-04-09 01:54:20.708787 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-09 01:54:20.789403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-09 01:54:20.789473 | orchestrator |
2026-04-09 01:54:20.789481 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-09 01:54:21.470806 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:21.470933 | orchestrator |
2026-04-09 01:54:21.470958 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-09 01:54:21.549853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-09 01:54:21.549953 | orchestrator |
2026-04-09 01:54:21.549969 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-09 01:54:23.082289 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 01:54:23.082394 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 01:54:23.082410 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:23.082424 | orchestrator |
2026-04-09 01:54:23.082436 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-09 01:54:23.793723 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:23.793892 | orchestrator |
2026-04-09 01:54:23.793925 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-09 01:54:23.859726 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:54:23.859836 | orchestrator |
2026-04-09 01:54:23.859856 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-09 01:54:23.996776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-09 01:54:23.996897 | orchestrator |
2026-04-09 01:54:23.996914 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-09 01:54:24.617786 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:24.617911 | orchestrator |
2026-04-09 01:54:24.617933 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-09 01:54:25.061809 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:25.061911 | orchestrator |
2026-04-09 01:54:25.061953 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-09 01:54:26.475153 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-09 01:54:26.475257 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-09 01:54:26.475272 | orchestrator |
2026-04-09 01:54:26.475286 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-09 01:54:27.177822 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:27.177928 | orchestrator |
2026-04-09 01:54:27.177946 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-09 01:54:27.574632 | orchestrator | ok: [testbed-manager]
2026-04-09 01:54:27.574735 | orchestrator |
2026-04-09 01:54:27.574752 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-09 01:54:27.972310 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:27.972414 | orchestrator |
2026-04-09 01:54:27.972446 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-09 01:54:28.024949 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:54:28.025040 | orchestrator |
2026-04-09 01:54:28.025057 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-09 01:54:28.103544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-09 01:54:28.103857 | orchestrator |
2026-04-09 01:54:28.103877 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-09 01:54:28.165613 | orchestrator | ok: [testbed-manager]
2026-04-09 01:54:28.165708 | orchestrator |
2026-04-09 01:54:28.165722 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-09 01:54:30.462294 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-09 01:54:30.462376 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-09 01:54:30.462386 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-09 01:54:30.462392 | orchestrator |
2026-04-09 01:54:30.462399 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-09 01:54:31.227287 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:31.227371 | orchestrator |
2026-04-09 01:54:31.227382 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-09 01:54:32.023051 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:32.023143 | orchestrator |
2026-04-09 01:54:32.023156 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-09 01:54:32.780114 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:32.780214 | orchestrator |
2026-04-09 01:54:32.780231 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-09 01:54:32.855915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-09 01:54:32.855990 | orchestrator |
2026-04-09 01:54:32.856001 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-09 01:54:32.911748 | orchestrator | ok: [testbed-manager]
2026-04-09 01:54:32.911874 | orchestrator |
2026-04-09 01:54:32.911897 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-09 01:54:33.676298 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-09 01:54:33.676404 | orchestrator |
2026-04-09 01:54:33.676428 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-09 01:54:33.771025 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-09 01:54:33.771132 | orchestrator |
2026-04-09 01:54:33.771150 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-09 01:54:34.557216 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:34.557303 | orchestrator |
2026-04-09 01:54:34.557315 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-09 01:54:35.267732 | orchestrator | ok: [testbed-manager]
2026-04-09 01:54:35.267824 | orchestrator |
2026-04-09 01:54:35.267833 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-09 01:54:35.325513 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:54:35.325661 | orchestrator |
2026-04-09 01:54:35.325672 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-09 01:54:35.394831 | orchestrator | ok: [testbed-manager]
2026-04-09 01:54:35.394901 | orchestrator |
2026-04-09 01:54:35.394908 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-09 01:54:36.285337 | orchestrator | changed: [testbed-manager]
2026-04-09 01:54:36.285451 | orchestrator |
2026-04-09 01:54:36.285469 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-09 01:55:55.691715 | orchestrator | changed: [testbed-manager]
2026-04-09 01:55:55.691833 | orchestrator |
2026-04-09 01:55:55.691851 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-09 01:55:56.735015 | orchestrator | ok: [testbed-manager]
2026-04-09 01:55:56.735140 | orchestrator |
2026-04-09 01:55:56.735158 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-09 01:55:56.794729 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:55:56.794854 | orchestrator |
2026-04-09 01:55:56.794871 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-09 01:56:07.027353 | orchestrator | changed: [testbed-manager]
2026-04-09 01:56:07.027477 | orchestrator |
2026-04-09 01:56:07.027504 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-09 01:56:07.136886 | orchestrator | ok: [testbed-manager]
2026-04-09 01:56:07.136975 | orchestrator |
2026-04-09 01:56:07.136986 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-09 01:56:07.136995 | orchestrator |
2026-04-09 01:56:07.137003 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-09 01:56:07.199885 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:56:07.199955 | orchestrator |
2026-04-09 01:56:07.199963 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-09 01:57:07.257378 | orchestrator | Pausing for 60 seconds
2026-04-09 01:57:07.257500 | orchestrator | changed: [testbed-manager]
2026-04-09 01:57:07.257528 | orchestrator |
2026-04-09 01:57:07.257548 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-09 01:57:10.561127 | orchestrator | changed: [testbed-manager]
2026-04-09 01:57:10.561200 | orchestrator |
2026-04-09 01:57:10.561207 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-09 01:58:12.879278 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-09 01:58:12.879389 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-09 01:58:12.879422 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-04-09 01:58:12.879433 | orchestrator | changed: [testbed-manager]
2026-04-09 01:58:12.879446 | orchestrator |
2026-04-09 01:58:12.879457 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-09 01:58:24.999306 | orchestrator | changed: [testbed-manager]
2026-04-09 01:58:24.999402 | orchestrator |
2026-04-09 01:58:24.999415 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-09 01:58:25.097266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-09 01:58:25.097342 | orchestrator |
2026-04-09 01:58:25.097350 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-09 01:58:25.097357 | orchestrator |
2026-04-09 01:58:25.097363 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-09 01:58:25.157084 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:58:25.157167 | orchestrator |
2026-04-09 01:58:25.157182 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-09 01:58:25.242371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-09 01:58:25.242468 | orchestrator |
2026-04-09 01:58:25.242483 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-09 01:58:26.077350 | orchestrator | changed: [testbed-manager]
2026-04-09 01:58:26.077454 | orchestrator |
2026-04-09 01:58:26.077472 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-09 01:58:29.595954 | orchestrator | ok: [testbed-manager]
2026-04-09 01:58:29.596071 | orchestrator |
2026-04-09 01:58:29.596090 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-09 01:58:29.667737 | orchestrator | ok: [testbed-manager] => {
2026-04-09 01:58:29.667862 | orchestrator | "version_check_result.stdout_lines": [
2026-04-09 01:58:29.667883 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-09 01:58:29.667897 | orchestrator | "Checking running containers against expected versions...",
2026-04-09 01:58:29.667910 | orchestrator | "",
2026-04-09 01:58:29.667922 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-09 01:58:29.667934 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-09 01:58:29.667946 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.667959 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-09 01:58:29.667978 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.667997 | orchestrator | "",
2026-04-09 01:58:29.668013 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-09 01:58:29.668069 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-09 01:58:29.668094 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668112 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-09 01:58:29.668129 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.668148 | orchestrator | "",
2026-04-09 01:58:29.668166 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-09 01:58:29.668185 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-09 01:58:29.668204 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668223 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-09 01:58:29.668242 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.668255 | orchestrator | "",
2026-04-09 01:58:29.668267 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-09 01:58:29.668278 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-09 01:58:29.668289 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668300 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-09 01:58:29.668311 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.668322 | orchestrator | "",
2026-04-09 01:58:29.668336 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-09 01:58:29.668347 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-09 01:58:29.668358 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668369 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-09 01:58:29.668380 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.668391 | orchestrator | "",
2026-04-09 01:58:29.668402 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-09 01:58:29.668412 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.668423 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668434 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.668445 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.668456 | orchestrator | "",
2026-04-09 01:58:29.668467 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-09 01:58:29.668478 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-09 01:58:29.668489 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668500 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-09 01:58:29.668518 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.668536 | orchestrator | "",
2026-04-09 01:58:29.668554 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-09 01:58:29.668573 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-09 01:58:29.668592 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668610 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-09 01:58:29.668629 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.668648 | orchestrator | "",
2026-04-09 01:58:29.668667 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-09 01:58:29.668712 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-09 01:58:29.668731 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668748 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-09 01:58:29.668764 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.668781 | orchestrator | "",
2026-04-09 01:58:29.668798 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-09 01:58:29.668815 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-09 01:58:29.668833 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668851 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-09 01:58:29.668868 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.668887 | orchestrator | "",
2026-04-09 01:58:29.668906 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-09 01:58:29.668943 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.668963 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.668981 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.669000 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.669011 | orchestrator | "",
2026-04-09 01:58:29.669022 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-09 01:58:29.669033 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.669044 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.669055 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.669066 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.669078 | orchestrator | "",
2026-04-09 01:58:29.669089 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-09 01:58:29.669100 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.669111 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.669122 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.669133 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.669144 | orchestrator | "",
2026-04-09 01:58:29.669155 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-09 01:58:29.669166 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.669177 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.669188 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.669220 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.669232 | orchestrator | "",
2026-04-09 01:58:29.669243 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-09 01:58:29.669254 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.669275 | orchestrator | " Enabled: true",
2026-04-09 01:58:29.669287 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-09 01:58:29.669298 | orchestrator | " Status: ✅ MATCH",
2026-04-09 01:58:29.669309 | orchestrator | "",
2026-04-09 01:58:29.669320 | orchestrator | "=== Summary ===",
2026-04-09 01:58:29.669331 | orchestrator | "Errors (version mismatches): 0",
2026-04-09 01:58:29.669342 | orchestrator | "Warnings (expected containers not
running): 0", 2026-04-09 01:58:29.669353 | orchestrator | "", 2026-04-09 01:58:29.669364 | orchestrator | "✅ All running containers match expected versions!" 2026-04-09 01:58:29.669391 | orchestrator | ] 2026-04-09 01:58:29.669413 | orchestrator | } 2026-04-09 01:58:29.669425 | orchestrator | 2026-04-09 01:58:29.669437 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-09 01:58:29.729219 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:58:29.729362 | orchestrator | 2026-04-09 01:58:29.729382 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:58:29.729395 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-09 01:58:29.729405 | orchestrator | 2026-04-09 01:58:29.843343 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-09 01:58:29.843461 | orchestrator | + deactivate 2026-04-09 01:58:29.843487 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-09 01:58:29.843509 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 01:58:29.843531 | orchestrator | + export PATH 2026-04-09 01:58:29.843549 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-09 01:58:29.843567 | orchestrator | + '[' -n '' ']' 2026-04-09 01:58:29.843608 | orchestrator | + hash -r 2026-04-09 01:58:29.843626 | orchestrator | + '[' -n '' ']' 2026-04-09 01:58:29.843646 | orchestrator | + unset VIRTUAL_ENV 2026-04-09 01:58:29.843663 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-09 01:58:29.843740 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-09 01:58:29.843762 | orchestrator | + unset -f deactivate 2026-04-09 01:58:29.843782 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-09 01:58:29.851589 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 01:58:29.851673 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-09 01:58:29.851743 | orchestrator | + local max_attempts=60 2026-04-09 01:58:29.851758 | orchestrator | + local name=ceph-ansible 2026-04-09 01:58:29.851769 | orchestrator | + local attempt_num=1 2026-04-09 01:58:29.852808 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 01:58:29.890850 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 01:58:29.890950 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-09 01:58:29.890966 | orchestrator | + local max_attempts=60 2026-04-09 01:58:29.890980 | orchestrator | + local name=kolla-ansible 2026-04-09 01:58:29.890992 | orchestrator | + local attempt_num=1 2026-04-09 01:58:29.891761 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-09 01:58:29.927318 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 01:58:29.927434 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-09 01:58:29.927458 | orchestrator | + local max_attempts=60 2026-04-09 01:58:29.927478 | orchestrator | + local name=osism-ansible 2026-04-09 01:58:29.927497 | orchestrator | + local attempt_num=1 2026-04-09 01:58:29.927515 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-09 01:58:29.961149 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 01:58:29.961240 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-09 01:58:29.961255 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-09 01:58:30.693015 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-09 01:58:30.905147 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-09 01:58:30.905233 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-09 01:58:30.905246 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-09 01:58:30.905256 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-09 01:58:30.905266 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-04-09 01:58:30.905295 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-09 01:58:30.905303 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-09 01:58:30.905312 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-09 01:58:30.905320 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-09 01:58:30.905328 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-09 01:58:30.905336 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-04-09 01:58:30.905344 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-09 01:58:30.905352 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-09 01:58:30.905380 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-09 01:58:30.905388 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-09 01:58:30.905397 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-09 01:58:30.911252 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-09 01:58:30.977230 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-09 01:58:30.977332 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-09 01:58:30.980561 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-09 01:58:43.467475 | orchestrator | 2026-04-09 01:58:43 | INFO  | Task 68ec5d32-2945-4950-a371-3e8f15273174 (resolvconf) was prepared for execution. 2026-04-09 01:58:43.467619 | orchestrator | 2026-04-09 01:58:43 | INFO  | It takes a moment until task 68ec5d32-2945-4950-a371-3e8f15273174 (resolvconf) has been started and output is visible here. 
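The xtrace above shows `wait_for_container_healthy` being called with a retry budget of 60 attempts per container; only the variable setup and the `healthy` comparison are visible in the log, so the retry loop, sleep interval, and failure handling in this sketch are assumptions:

```shell
# Hedged sketch of the wait_for_container_healthy helper traced above.
# The visible xtrace confirms the signature (max_attempts, name) and the
# `docker inspect` health probe; the 5s delay and error message are guesses.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until it reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

Called as in the log, e.g. `wait_for_container_healthy 60 ceph-ansible`, which returns immediately when the first probe already reports `healthy`.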
2026-04-09 01:58:58.433609 | orchestrator | 2026-04-09 01:58:58.433747 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-09 01:58:58.433768 | orchestrator | 2026-04-09 01:58:58.433776 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 01:58:58.433783 | orchestrator | Thursday 09 April 2026 01:58:47 +0000 (0:00:00.163) 0:00:00.163 ******** 2026-04-09 01:58:58.433790 | orchestrator | ok: [testbed-manager] 2026-04-09 01:58:58.433798 | orchestrator | 2026-04-09 01:58:58.433805 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-09 01:58:58.433813 | orchestrator | Thursday 09 April 2026 01:58:51 +0000 (0:00:04.007) 0:00:04.171 ******** 2026-04-09 01:58:58.433820 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:58:58.433827 | orchestrator | 2026-04-09 01:58:58.433834 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-09 01:58:58.433840 | orchestrator | Thursday 09 April 2026 01:58:51 +0000 (0:00:00.062) 0:00:04.233 ******** 2026-04-09 01:58:58.433847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-09 01:58:58.433855 | orchestrator | 2026-04-09 01:58:58.433861 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-09 01:58:58.433867 | orchestrator | Thursday 09 April 2026 01:58:51 +0000 (0:00:00.095) 0:00:04.329 ******** 2026-04-09 01:58:58.433891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 01:58:58.433898 | orchestrator | 2026-04-09 01:58:58.433904 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-09 01:58:58.433910 | orchestrator | Thursday 09 April 2026 01:58:52 +0000 (0:00:00.109) 0:00:04.439 ******** 2026-04-09 01:58:58.433917 | orchestrator | ok: [testbed-manager] 2026-04-09 01:58:58.433923 | orchestrator | 2026-04-09 01:58:58.433930 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-09 01:58:58.433936 | orchestrator | Thursday 09 April 2026 01:58:53 +0000 (0:00:01.260) 0:00:05.699 ******** 2026-04-09 01:58:58.433942 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:58:58.433949 | orchestrator | 2026-04-09 01:58:58.433955 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-09 01:58:58.433962 | orchestrator | Thursday 09 April 2026 01:58:53 +0000 (0:00:00.061) 0:00:05.761 ******** 2026-04-09 01:58:58.433989 | orchestrator | ok: [testbed-manager] 2026-04-09 01:58:58.433996 | orchestrator | 2026-04-09 01:58:58.434002 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-09 01:58:58.434009 | orchestrator | Thursday 09 April 2026 01:58:53 +0000 (0:00:00.534) 0:00:06.296 ******** 2026-04-09 01:58:58.434055 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:58:58.434062 | orchestrator | 2026-04-09 01:58:58.434068 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-09 01:58:58.434076 | orchestrator | Thursday 09 April 2026 01:58:54 +0000 (0:00:00.085) 0:00:06.381 ******** 2026-04-09 01:58:58.434082 | orchestrator | changed: [testbed-manager] 2026-04-09 01:58:58.434088 | orchestrator | 2026-04-09 01:58:58.434098 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-09 01:58:58.434110 | orchestrator | Thursday 09 April 2026 01:58:54 +0000 (0:00:00.613) 0:00:06.994 ******** 2026-04-09 01:58:58.434120 | orchestrator | changed: 
[testbed-manager] 2026-04-09 01:58:58.434130 | orchestrator | 2026-04-09 01:58:58.434140 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-09 01:58:58.434151 | orchestrator | Thursday 09 April 2026 01:58:55 +0000 (0:00:01.153) 0:00:08.148 ******** 2026-04-09 01:58:58.434161 | orchestrator | ok: [testbed-manager] 2026-04-09 01:58:58.434171 | orchestrator | 2026-04-09 01:58:58.434181 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-09 01:58:58.434191 | orchestrator | Thursday 09 April 2026 01:58:56 +0000 (0:00:01.095) 0:00:09.243 ******** 2026-04-09 01:58:58.434201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-09 01:58:58.434211 | orchestrator | 2026-04-09 01:58:58.434220 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-09 01:58:58.434231 | orchestrator | Thursday 09 April 2026 01:58:56 +0000 (0:00:00.084) 0:00:09.328 ******** 2026-04-09 01:58:58.434241 | orchestrator | changed: [testbed-manager] 2026-04-09 01:58:58.434252 | orchestrator | 2026-04-09 01:58:58.434261 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:58:58.434271 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 01:58:58.434281 | orchestrator | 2026-04-09 01:58:58.434290 | orchestrator | 2026-04-09 01:58:58.434299 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:58:58.434309 | orchestrator | Thursday 09 April 2026 01:58:58 +0000 (0:00:01.194) 0:00:10.522 ******** 2026-04-09 01:58:58.434318 | orchestrator | =============================================================================== 2026-04-09 01:58:58.434327 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.01s 2026-04-09 01:58:58.434338 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.26s 2026-04-09 01:58:58.434349 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2026-04-09 01:58:58.434359 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.15s 2026-04-09 01:58:58.434368 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.10s 2026-04-09 01:58:58.434379 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.61s 2026-04-09 01:58:58.434411 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2026-04-09 01:58:58.434423 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.11s 2026-04-09 01:58:58.434434 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s 2026-04-09 01:58:58.434445 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-04-09 01:58:58.434456 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-04-09 01:58:58.434466 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-09 01:58:58.434488 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-09 01:58:58.787153 | orchestrator | + osism apply sshconfig 2026-04-09 01:59:10.917373 | orchestrator | 2026-04-09 01:59:10 | INFO  | Task 4dfbb1e6-450e-4f12-95cf-a18f8e9448a0 (sshconfig) was prepared for execution. 
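The resolvconf play above archives any pre-existing `/etc/resolv.conf` and links the systemd-resolved stub file into place. A minimal shell equivalent of those two tasks might look like the following (the paths default to the standard systemd-resolved locations seen in the task names; the `.backup` suffix is an assumption, since the role's actual archive naming is not shown in the log):

```shell
# Hedged sketch: archive a plain /etc/resolv.conf, then replace it with a
# symlink to the systemd-resolved stub, as the resolvconf tasks above do.
link_stub_resolv() {
    local resolv="${1:-/etc/resolv.conf}"
    local stub="${2:-/run/systemd/resolve/stub-resolv.conf}"
    # Archive only if the target is a regular file (a symlink means
    # systemd-resolved is already managing it).
    if [[ -e "$resolv" && ! -L "$resolv" ]]; then
        cp -a "$resolv" "${resolv}.backup"
    fi
    # -sfn: force-replace the destination with the symlink.
    ln -sfn "$stub" "$resolv"
}
```

On a real host this would be followed by `systemctl enable --now systemd-resolved`, matching the "Start/enable systemd-resolved service" task in the play.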
2026-04-09 01:59:10.917458 | orchestrator | 2026-04-09 01:59:10 | INFO  | It takes a moment until task 4dfbb1e6-450e-4f12-95cf-a18f8e9448a0 (sshconfig) has been started and output is visible here. 2026-04-09 01:59:23.606563 | orchestrator | 2026-04-09 01:59:23.606679 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-09 01:59:23.606695 | orchestrator | 2026-04-09 01:59:23.606783 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-09 01:59:23.606802 | orchestrator | Thursday 09 April 2026 01:59:15 +0000 (0:00:00.186) 0:00:00.186 ******** 2026-04-09 01:59:23.606830 | orchestrator | ok: [testbed-manager] 2026-04-09 01:59:23.606847 | orchestrator | 2026-04-09 01:59:23.606863 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-09 01:59:23.606880 | orchestrator | Thursday 09 April 2026 01:59:15 +0000 (0:00:00.607) 0:00:00.793 ******** 2026-04-09 01:59:23.606896 | orchestrator | changed: [testbed-manager] 2026-04-09 01:59:23.606914 | orchestrator | 2026-04-09 01:59:23.606929 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-09 01:59:23.606947 | orchestrator | Thursday 09 April 2026 01:59:16 +0000 (0:00:00.555) 0:00:01.349 ******** 2026-04-09 01:59:23.606963 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-09 01:59:23.606980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-09 01:59:23.606997 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-09 01:59:23.607014 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-09 01:59:23.607031 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-09 01:59:23.607049 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-09 01:59:23.607066 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-09 01:59:23.607083 | orchestrator | 2026-04-09 01:59:23.607101 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-09 01:59:23.607118 | orchestrator | Thursday 09 April 2026 01:59:22 +0000 (0:00:06.127) 0:00:07.477 ******** 2026-04-09 01:59:23.607134 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:59:23.607151 | orchestrator | 2026-04-09 01:59:23.607167 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-09 01:59:23.607184 | orchestrator | Thursday 09 April 2026 01:59:22 +0000 (0:00:00.086) 0:00:07.564 ******** 2026-04-09 01:59:23.607200 | orchestrator | changed: [testbed-manager] 2026-04-09 01:59:23.607217 | orchestrator | 2026-04-09 01:59:23.607233 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:59:23.607250 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 01:59:23.607267 | orchestrator | 2026-04-09 01:59:23.607303 | orchestrator | 2026-04-09 01:59:23.607333 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:59:23.607345 | orchestrator | Thursday 09 April 2026 01:59:23 +0000 (0:00:00.590) 0:00:08.154 ******** 2026-04-09 01:59:23.607355 | orchestrator | =============================================================================== 2026-04-09 01:59:23.607365 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.13s 2026-04-09 01:59:23.607375 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.61s 2026-04-09 01:59:23.607384 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2026-04-09 01:59:23.607394 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.56s 2026-04-09 01:59:23.607428 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-04-09 01:59:23.976489 | orchestrator | + osism apply known-hosts 2026-04-09 01:59:36.294665 | orchestrator | 2026-04-09 01:59:36 | INFO  | Task 0b2df5c0-5f8e-401d-ae04-cad68a1ed6dc (known-hosts) was prepared for execution. 2026-04-09 01:59:36.294849 | orchestrator | 2026-04-09 01:59:36 | INFO  | It takes a moment until task 0b2df5c0-5f8e-401d-ae04-cad68a1ed6dc (known-hosts) has been started and output is visible here. 2026-04-09 01:59:54.464246 | orchestrator | 2026-04-09 01:59:54.464378 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-09 01:59:54.464404 | orchestrator | 2026-04-09 01:59:54.464422 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-09 01:59:54.464441 | orchestrator | Thursday 09 April 2026 01:59:40 +0000 (0:00:00.189) 0:00:00.189 ******** 2026-04-09 01:59:54.464458 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-09 01:59:54.464476 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-09 01:59:54.464493 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-09 01:59:54.464508 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-09 01:59:54.464519 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-09 01:59:54.464528 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-09 01:59:54.464538 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-09 01:59:54.464548 | orchestrator | 2026-04-09 01:59:54.464558 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-09 01:59:54.464569 | orchestrator | Thursday 09 April 2026 01:59:47 +0000 (0:00:06.262) 0:00:06.452 ******** 2026-04-09 
01:59:54.464581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-09 01:59:54.464593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-09 01:59:54.464603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-09 01:59:54.464613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-09 01:59:54.464623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-09 01:59:54.464643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-09 01:59:54.464653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-09 01:59:54.464663 | orchestrator | 2026-04-09 01:59:54.464673 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 01:59:54.464683 | orchestrator | Thursday 09 April 2026 01:59:47 +0000 (0:00:00.172) 0:00:06.624 ******** 2026-04-09 01:59:54.464693 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFx+9ygPhSGZA/4qARqrsmdLJD0SMPqLESgYr3Gpj45WGcXTNL56+rA+fvw5VTOg/bB3pmLzGYA3tbdI6IG8CBo=) 2026-04-09 01:59:54.464712 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDc+XIYCOz+OZvkKC67YcLmWjvkbx2Y6ZFNtr0Pmy6qV2sWvGSW3hMnEhSR3V8c4Co5eGBkVBd4yk4HfMj6xvktmsX0FUmYs2lMZcWO9wMqTPrNvwHk3e+1Q0+zLft606kjX72sZBn9n4hyCwIKw0nQ/Pk0m54Fla2jj6h/+Nn4A/BNZzXFvVyF4AY49pHH4UIOOyQAtyJcsLrvKLN4z5H+02j9DprwgrReURAPXhZQBaSpzT9oOL2CB6m9yawz9NGQMhq7lBP8QNhWPXuc6GKQKM/wHHhpEBzkE/Ybca0MDghFU9yGHTmv3vFwpAxnESBvlez6RDpvP1YvV7yewaHeMCmHPVFBVwRMJNKkYM/Y6e4Hp87UV2KfgQ3i6TqX+Onzt2Jame3Y5RUKoHZIb09KJt2pI204jXTJyunTf0Kjh4mAy7PRCbIme6QPFglKu4oYPFqootM6G8073+6w0+3sYLBMKnf1yAB1spFNY/aJhXns8DZzuKMg1lY5B9s6dX8=) 2026-04-09 01:59:54.464806 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFqJoL83l5o7AV3sM5snIbqstChaANUKcG3CEo6b+8JL) 2026-04-09 01:59:54.464819 | orchestrator | 2026-04-09 01:59:54.464831 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 01:59:54.464843 | orchestrator | Thursday 09 April 2026 01:59:48 +0000 (0:00:01.302) 0:00:07.927 ******** 2026-04-09 01:59:54.464875 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDE0eF+ghg1KRw2N+UytgA3rkEP9mR+P0153H/mHL3SzKnsvdBKihdH8dK5FQ+ejyaDh2CQK4z/Wb/Qm+Wfo0Try00IidHSPxvtmeNPe6zkmy5UJf8cCORIR7kasF0hGsqq+20K/7uqGf808uC0qp9T6Ms+RFDs0XqnMpEe/psWcARM2JyNPDetmePfeZquYSI+FtjiqoyezeDuxdG8xqvs1YabaanmkMoGmzq9ORGAa7WuwTBqnNdvf5fnVhszxe3eJuKDVgPyrNhZlqPs3zmqa5b0YR3/Q9G6tbXt5ibnhTTmfqOggYZhRJjGTOI/QomsJFaDxJA+WJGfjmmCppoYbWT8G9vmd5nTV7wALSiu5TgynUHObiGWyE6k5YvMWoG0LMHv14V+obbbJ8eCESQcwvei9R+z01vOrDD0SaMBBSijFT6otFO4gBYtoURo0UroPeKCLkJmE3VtTZ8is5s6fjYDu20QehoEiHPR5Xr6Z9qAwAznw5XQGtuMvEUiNo8=) 2026-04-09 01:59:54.464888 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJu4akp/t1GgKKGrzWBTJeGVpA0rq9D6HuTaufA0GO787V4KIBZbUppmKsA4/mkvS4ALSztJTONpUk2uAa5m6Ic=) 2026-04-09 01:59:54.464900 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBeAh7WTgb9+wU9mU4XUgNVPidkmH0ctXq8OFz90koqr) 2026-04-09 01:59:54.464911 | orchestrator | 2026-04-09 01:59:54.464922 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 01:59:54.464933 | orchestrator | Thursday 09 April 2026 01:59:49 +0000 (0:00:01.154) 0:00:09.082 ******** 2026-04-09 01:59:54.464945 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIWhV/XOmJcg3+793+VWnrJaiZoSkYXh3r+yJV+Q0mfTvRaVv3J4lajPQhTfzl00aaKcfL570LoXu8udnPwmRPcD6yOtAVXctpiJ4zV8XM5zViIQ7CXrz+rUmmjZIDvVn/mjiMphR3iUrXDif2UtbjLkPJVxbZ8J+ETOhXYoRmSr7Wtqf4IPntlwcBuNSEDxtjsSdBzkL/dwfmjdjRLT9jIatHfwx18BtG3/uNW6QjwXldn4zH8VceZ7zuFPhB15J4ao8bUnQ6r6ae4B1Wv3jsP3XHghtTKG7Wz4VuQ97j1KqxT++HiZiZmoU2Ypakjbhk0uc42znU6rl/Czoj0borGl0DnRk+ez/VThqfjXwqjxaswwHj+EzxUvzK5Chrb/FRo6ZDqDn/gmQaI73RjGCYS0bZnoQ/xLlxlBeCtstgHdWDrHS8TI26dxeKNoFjJZk7/0d0LxDdyz1fWMDxBOcG8tIwfXDf+dy5ypwmfSXHzd9oZZqEgxUDBkXb5PGq020=) 2026-04-09 01:59:54.464957 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNtxLSSKJk6BSc/F6eGYUQoG/z7mTiwafPGlP2eCJtMRC2+LrM/jPXAsZ7UAdiGrKXrjiGqXxx3O8EuApKPtdUk=) 2026-04-09 01:59:54.464969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJhnWHMuV9Yblo3BvSq/gnL4Qpj4qINoYX4JvYG+gDD) 2026-04-09 01:59:54.464980 | orchestrator | 2026-04-09 01:59:54.464991 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 01:59:54.465002 | orchestrator | Thursday 09 April 2026 01:59:50 +0000 (0:00:01.166) 0:00:10.248 ******** 
2026-04-09 01:59:54.465013 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIR02grlsyvgd0EYTFzLDceG8jqKFJ1spTMLBjmDW5hg) 2026-04-09 01:59:54.465025 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuH7NBLVxzxpwe1t/lSnlq2dpzcKj1VWuxTNo7Slpf4KWHKKmVkhyj8yB02ixEvaafHtzfmz4yvOfUUP5CKBPpYhzTvOvenMkkMe9OfaMaULCz7ZxSPUhm3DA98czY5IFi9/cU9uj1B2UKGkwi5jwmb7eF7KFlK6kf1jEUAB8ut56orAY6ULhU8b9k1ihWbFYpmXHXkSKMFsV14xeMvCQli9IDXKItZlgLegP7LZNH9aR3TkSGIXCFssLmIWauNSMFGyspFbojBe6WdkGSojZEgZo89Gr3oq+6/ShJIA9JVLwEppeXeLZ7PJoIpyCdlNTv1hCwVS1jDjVO0X7UcpMH8T7CKiDTWWenxTgpP/ddb0FPuR300cylwSCrxnSzyj/LujeMUERqAYEb+BeQom06NGMsdBHb3JnCLu6rYfVI3oWzdJ6oyhRfApyW2ct00X87SdBX1uXvOeW4CsQxKZaluPXl16jhXMJRD2cHPakUc9hjvQEjhrjnYMqinr19UtM=) 2026-04-09 01:59:54.465052 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJTfnWfiw8sYBn62jiRVAmNYu36IpTikoJBzeD+29GRv1akYvC5jrA6+3zsS9rCkzIGVsmsDjVAQPElO950DMmg=) 2026-04-09 01:59:54.465079 | orchestrator | 2026-04-09 01:59:54.465101 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 01:59:54.465117 | orchestrator | Thursday 09 April 2026 01:59:52 +0000 (0:00:01.160) 0:00:11.409 ******** 2026-04-09 01:59:54.465216 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCW9EgeSvfDqWe/aTsIegyThQDvETMHXm/OdK0o2RgFGWDFzAnFgt1kcSC5QSpr8WW1cHRqkJq+XMM1rq8r5VB6kVMPx2QwwN/5BeJSzfabc2K7PxgtH2E537D32L8ErLBNpBvZEoI7zOoC5O0U8baYBYeVkc99cNSrd/0PtBf55nKQKpJG6viXmQnw6IB37m6b3vB68o/fVvEMOMNk4hMO71Ls83zRkEhQi3HjHgQwpvUXS3qjEGRZoF3uh66dwTVRagluB9LHy8cFw+xP1RnjcfPxYPASu6oKOyhpoqC97VZ70xcKnw3eIlO01kHQ32hGQeRa3H+xTdbYH6nFsQHzxeE2fMRdsUEi2t4l1H9hv1HIkezJsBfeaE+lX7eL7eFi82ZaIO+HTNa1IGmRK7YgmrMRP2OEu90NioSAPaabzoIN93f070VRgEZKDZaZ3KFCVztfErurNjEoRveg0XpPOQNK6ZhpswzX+3R9KE1GrabzR8aEFELVND/buf8P8K8=) 2026-04-09 01:59:54.465237 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHRoUw0vQtQPBaSllApOBsFgqaZfQNrA7dehNS0Of/xljT+aia7peHlzNdXn73PGl7ofgPGWLewnLWFwKsIU7ho=) 2026-04-09 01:59:54.465253 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICSZq4lNBJrZg16sOIHtoZwp3TI6FM/oCCoi6bnLYtwo) 2026-04-09 01:59:54.465269 | orchestrator | 2026-04-09 01:59:54.465287 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 01:59:54.465303 | orchestrator | Thursday 09 April 2026 01:59:53 +0000 (0:00:01.184) 0:00:12.594 ******** 2026-04-09 01:59:54.465337 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGBu73rJlueFP7yIaPA69UK+1jp4neQxt57KkP6IdGHu8LZQC7lZn8qpeVnTR0t7GWl++kGIk/TZj19/3YgyPQA=) 2026-04-09 02:00:06.275266 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDAeii96J9U8lujibMpOlMgcekjE+7UQ4aDEFg7NjD93xi6MyWd7PTYO1m/5ipKoOEG1sQxazW22PgOjzPOsGY8dQUhHdebgpNmSub+2EhljRD8ESHsx1+9aBzjvl53M2wAOaqan6rKFZiUbJQncNxRSGpfj5CAEGD2cdHOZSLOuYxgbJAjE+slxYGwhe256GGDA5ieSvkj9T9auYxJ8DYVUWIgPCcjXxtwjOwjMXb9HxA5PzhvLVfQkn5koEvtsLGBjV367GYTdLTB7DeYxghzbpTN1D6nUqrVZvST7uJvIFzHppe4fNOan9V51rfLyr4A2QYBV0uPpdUGTOxW03neVAAKY503IkURcGowFiJpQrHkv4PqSoShhb4I+Ghv9pT1IUEl140Gpl4O35eHWkohCVXDmZZAYAGFLcG5tB9VcN5TOxeAA9hSYteE55ZHoImIHCs3ZpVHZ0qlBumgO+JHa/Oy3rTyzsu4HLgpipQVaRYCDadawEILrbDtO+du0o8=) 2026-04-09 02:00:06.275361 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMTIyiYsq9nivsmwM0QqYJM2+ZPjF41Yn75q9vDJ9X+2) 2026-04-09 02:00:06.275370 | orchestrator | 2026-04-09 02:00:06.275375 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 02:00:06.275381 | orchestrator | Thursday 09 April 2026 01:59:54 +0000 (0:00:01.151) 0:00:13.745 ******** 2026-04-09 02:00:06.275385 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMzSp7aP4+hfdtNRNoKlj9VfwNlgfUNMzFX8F7bFo9fN) 2026-04-09 02:00:06.275390 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPWPY9VlSWUhWC/ZP/7D3AxZpuor6+RiNKVN8p+xsyxBNuQVjw6/PYhVIUXX78ZQtrrsD9FuXNQXPDN4qNbeuuOkaDjMuYXeF3CKUWEiJhCRIkyj0/6HnHzeKfttc+k+RMIt56c/N2HmeMklkdYgytGXAwcOR2HxuuMnIZjWDqon6UQNFwrIuze0PkbDECFX0tkVQwCstDNYLJsrlZTzQZs+UjbX/bxsPTDB3WRtSt3YQj48lI9UVZGcv3KfVDJAxQnH8i/CWoAhiwOln9mKVPv068Dwig5S3R5Aue/3PFhi7tX/X8MmXncLY0BzwMyiS7mcJBpFDP0aIlxduRMpvKY6wnbbzNwjJqNLGcTFak2U1ASbqDnGAmZ8yXAm72x7DXohlyXqD9cp/C/nkVm2FkUPXxER7wYnBJjTc2hI+T1ONvJ+eEKG2VtOUZiyIniXi2gWowMldObFQaL1zUWkGy6Ezn9i2D1sPre50QiOiI+XVQsouXC+Vcm2CwFua2gus=) 2026-04-09 02:00:06.275409 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnrKLMwUkcZRdLvRnsAAg3PiZx/H/AO6IMgIkAUwG6Cz63tF8gR1Tjn32rrHN6E+ZDZ8H1Dq+bQlnxMc/r1AwE=) 2026-04-09 02:00:06.275415 | orchestrator | 2026-04-09 02:00:06.275419 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-09 02:00:06.275424 | orchestrator | Thursday 09 April 2026 01:59:55 +0000 (0:00:01.190) 0:00:14.936 ******** 2026-04-09 02:00:06.275428 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-09 02:00:06.275432 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-09 02:00:06.275436 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-09 02:00:06.275440 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-09 02:00:06.275444 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-09 02:00:06.275448 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-09 02:00:06.275451 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-09 02:00:06.275455 | orchestrator | 2026-04-09 02:00:06.275459 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-09 02:00:06.275464 | orchestrator | Thursday 09 April 2026 02:00:01 +0000 (0:00:05.623) 0:00:20.559 ******** 2026-04-09 02:00:06.275469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-09 02:00:06.275475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-09 02:00:06.275479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-09 02:00:06.275483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-09 02:00:06.275487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-09 02:00:06.275490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-09 02:00:06.275494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-09 02:00:06.275498 | orchestrator | 2026-04-09 02:00:06.275512 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 02:00:06.275516 | orchestrator | Thursday 09 April 2026 02:00:01 +0000 (0:00:00.188) 0:00:20.747 ******** 2026-04-09 02:00:06.275520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDc+XIYCOz+OZvkKC67YcLmWjvkbx2Y6ZFNtr0Pmy6qV2sWvGSW3hMnEhSR3V8c4Co5eGBkVBd4yk4HfMj6xvktmsX0FUmYs2lMZcWO9wMqTPrNvwHk3e+1Q0+zLft606kjX72sZBn9n4hyCwIKw0nQ/Pk0m54Fla2jj6h/+Nn4A/BNZzXFvVyF4AY49pHH4UIOOyQAtyJcsLrvKLN4z5H+02j9DprwgrReURAPXhZQBaSpzT9oOL2CB6m9yawz9NGQMhq7lBP8QNhWPXuc6GKQKM/wHHhpEBzkE/Ybca0MDghFU9yGHTmv3vFwpAxnESBvlez6RDpvP1YvV7yewaHeMCmHPVFBVwRMJNKkYM/Y6e4Hp87UV2KfgQ3i6TqX+Onzt2Jame3Y5RUKoHZIb09KJt2pI204jXTJyunTf0Kjh4mAy7PRCbIme6QPFglKu4oYPFqootM6G8073+6w0+3sYLBMKnf1yAB1spFNY/aJhXns8DZzuKMg1lY5B9s6dX8=) 2026-04-09 02:00:06.275525 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFx+9ygPhSGZA/4qARqrsmdLJD0SMPqLESgYr3Gpj45WGcXTNL56+rA+fvw5VTOg/bB3pmLzGYA3tbdI6IG8CBo=) 2026-04-09 02:00:06.275536 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFqJoL83l5o7AV3sM5snIbqstChaANUKcG3CEo6b+8JL) 2026-04-09 02:00:06.275540 | orchestrator | 2026-04-09 02:00:06.275545 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 02:00:06.275549 | orchestrator | Thursday 09 April 2026 02:00:02 +0000 (0:00:01.172) 0:00:21.919 ******** 2026-04-09 02:00:06.275553 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJu4akp/t1GgKKGrzWBTJeGVpA0rq9D6HuTaufA0GO787V4KIBZbUppmKsA4/mkvS4ALSztJTONpUk2uAa5m6Ic=) 2026-04-09 02:00:06.275557 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDE0eF+ghg1KRw2N+UytgA3rkEP9mR+P0153H/mHL3SzKnsvdBKihdH8dK5FQ+ejyaDh2CQK4z/Wb/Qm+Wfo0Try00IidHSPxvtmeNPe6zkmy5UJf8cCORIR7kasF0hGsqq+20K/7uqGf808uC0qp9T6Ms+RFDs0XqnMpEe/psWcARM2JyNPDetmePfeZquYSI+FtjiqoyezeDuxdG8xqvs1YabaanmkMoGmzq9ORGAa7WuwTBqnNdvf5fnVhszxe3eJuKDVgPyrNhZlqPs3zmqa5b0YR3/Q9G6tbXt5ibnhTTmfqOggYZhRJjGTOI/QomsJFaDxJA+WJGfjmmCppoYbWT8G9vmd5nTV7wALSiu5TgynUHObiGWyE6k5YvMWoG0LMHv14V+obbbJ8eCESQcwvei9R+z01vOrDD0SaMBBSijFT6otFO4gBYtoURo0UroPeKCLkJmE3VtTZ8is5s6fjYDu20QehoEiHPR5Xr6Z9qAwAznw5XQGtuMvEUiNo8=) 2026-04-09 02:00:06.275561 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBeAh7WTgb9+wU9mU4XUgNVPidkmH0ctXq8OFz90koqr) 2026-04-09 02:00:06.275565 | orchestrator | 2026-04-09 02:00:06.275569 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 02:00:06.275573 | orchestrator | Thursday 09 April 2026 02:00:03 +0000 (0:00:01.191) 0:00:23.111 ******** 2026-04-09 
02:00:06.275577 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJhnWHMuV9Yblo3BvSq/gnL4Qpj4qINoYX4JvYG+gDD) 2026-04-09 02:00:06.275581 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIWhV/XOmJcg3+793+VWnrJaiZoSkYXh3r+yJV+Q0mfTvRaVv3J4lajPQhTfzl00aaKcfL570LoXu8udnPwmRPcD6yOtAVXctpiJ4zV8XM5zViIQ7CXrz+rUmmjZIDvVn/mjiMphR3iUrXDif2UtbjLkPJVxbZ8J+ETOhXYoRmSr7Wtqf4IPntlwcBuNSEDxtjsSdBzkL/dwfmjdjRLT9jIatHfwx18BtG3/uNW6QjwXldn4zH8VceZ7zuFPhB15J4ao8bUnQ6r6ae4B1Wv3jsP3XHghtTKG7Wz4VuQ97j1KqxT++HiZiZmoU2Ypakjbhk0uc42znU6rl/Czoj0borGl0DnRk+ez/VThqfjXwqjxaswwHj+EzxUvzK5Chrb/FRo6ZDqDn/gmQaI73RjGCYS0bZnoQ/xLlxlBeCtstgHdWDrHS8TI26dxeKNoFjJZk7/0d0LxDdyz1fWMDxBOcG8tIwfXDf+dy5ypwmfSXHzd9oZZqEgxUDBkXb5PGq020=) 2026-04-09 02:00:06.275585 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNtxLSSKJk6BSc/F6eGYUQoG/z7mTiwafPGlP2eCJtMRC2+LrM/jPXAsZ7UAdiGrKXrjiGqXxx3O8EuApKPtdUk=) 2026-04-09 02:00:06.275589 | orchestrator | 2026-04-09 02:00:06.275593 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 02:00:06.275596 | orchestrator | Thursday 09 April 2026 02:00:05 +0000 (0:00:01.191) 0:00:24.302 ******** 2026-04-09 02:00:06.275605 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCuH7NBLVxzxpwe1t/lSnlq2dpzcKj1VWuxTNo7Slpf4KWHKKmVkhyj8yB02ixEvaafHtzfmz4yvOfUUP5CKBPpYhzTvOvenMkkMe9OfaMaULCz7ZxSPUhm3DA98czY5IFi9/cU9uj1B2UKGkwi5jwmb7eF7KFlK6kf1jEUAB8ut56orAY6ULhU8b9k1ihWbFYpmXHXkSKMFsV14xeMvCQli9IDXKItZlgLegP7LZNH9aR3TkSGIXCFssLmIWauNSMFGyspFbojBe6WdkGSojZEgZo89Gr3oq+6/ShJIA9JVLwEppeXeLZ7PJoIpyCdlNTv1hCwVS1jDjVO0X7UcpMH8T7CKiDTWWenxTgpP/ddb0FPuR300cylwSCrxnSzyj/LujeMUERqAYEb+BeQom06NGMsdBHb3JnCLu6rYfVI3oWzdJ6oyhRfApyW2ct00X87SdBX1uXvOeW4CsQxKZaluPXl16jhXMJRD2cHPakUc9hjvQEjhrjnYMqinr19UtM=) 2026-04-09 02:00:11.421569 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJTfnWfiw8sYBn62jiRVAmNYu36IpTikoJBzeD+29GRv1akYvC5jrA6+3zsS9rCkzIGVsmsDjVAQPElO950DMmg=) 2026-04-09 02:00:11.421656 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIR02grlsyvgd0EYTFzLDceG8jqKFJ1spTMLBjmDW5hg) 2026-04-09 02:00:11.421684 | orchestrator | 2026-04-09 02:00:11.421692 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 02:00:11.421700 | orchestrator | Thursday 09 April 2026 02:00:06 +0000 (0:00:01.253) 0:00:25.556 ******** 2026-04-09 02:00:11.421707 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCW9EgeSvfDqWe/aTsIegyThQDvETMHXm/OdK0o2RgFGWDFzAnFgt1kcSC5QSpr8WW1cHRqkJq+XMM1rq8r5VB6kVMPx2QwwN/5BeJSzfabc2K7PxgtH2E537D32L8ErLBNpBvZEoI7zOoC5O0U8baYBYeVkc99cNSrd/0PtBf55nKQKpJG6viXmQnw6IB37m6b3vB68o/fVvEMOMNk4hMO71Ls83zRkEhQi3HjHgQwpvUXS3qjEGRZoF3uh66dwTVRagluB9LHy8cFw+xP1RnjcfPxYPASu6oKOyhpoqC97VZ70xcKnw3eIlO01kHQ32hGQeRa3H+xTdbYH6nFsQHzxeE2fMRdsUEi2t4l1H9hv1HIkezJsBfeaE+lX7eL7eFi82ZaIO+HTNa1IGmRK7YgmrMRP2OEu90NioSAPaabzoIN93f070VRgEZKDZaZ3KFCVztfErurNjEoRveg0XpPOQNK6ZhpswzX+3R9KE1GrabzR8aEFELVND/buf8P8K8=) 2026-04-09 02:00:11.421716 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHRoUw0vQtQPBaSllApOBsFgqaZfQNrA7dehNS0Of/xljT+aia7peHlzNdXn73PGl7ofgPGWLewnLWFwKsIU7ho=) 2026-04-09 02:00:11.421767 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICSZq4lNBJrZg16sOIHtoZwp3TI6FM/oCCoi6bnLYtwo) 2026-04-09 02:00:11.421778 | orchestrator | 2026-04-09 02:00:11.421788 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 02:00:11.421797 | orchestrator | Thursday 09 April 2026 02:00:07 +0000 (0:00:01.315) 0:00:26.872 ******** 2026-04-09 02:00:11.421807 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAeii96J9U8lujibMpOlMgcekjE+7UQ4aDEFg7NjD93xi6MyWd7PTYO1m/5ipKoOEG1sQxazW22PgOjzPOsGY8dQUhHdebgpNmSub+2EhljRD8ESHsx1+9aBzjvl53M2wAOaqan6rKFZiUbJQncNxRSGpfj5CAEGD2cdHOZSLOuYxgbJAjE+slxYGwhe256GGDA5ieSvkj9T9auYxJ8DYVUWIgPCcjXxtwjOwjMXb9HxA5PzhvLVfQkn5koEvtsLGBjV367GYTdLTB7DeYxghzbpTN1D6nUqrVZvST7uJvIFzHppe4fNOan9V51rfLyr4A2QYBV0uPpdUGTOxW03neVAAKY503IkURcGowFiJpQrHkv4PqSoShhb4I+Ghv9pT1IUEl140Gpl4O35eHWkohCVXDmZZAYAGFLcG5tB9VcN5TOxeAA9hSYteE55ZHoImIHCs3ZpVHZ0qlBumgO+JHa/Oy3rTyzsu4HLgpipQVaRYCDadawEILrbDtO+du0o8=) 2026-04-09 02:00:11.421817 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGBu73rJlueFP7yIaPA69UK+1jp4neQxt57KkP6IdGHu8LZQC7lZn8qpeVnTR0t7GWl++kGIk/TZj19/3YgyPQA=) 2026-04-09 02:00:11.421828 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMTIyiYsq9nivsmwM0QqYJM2+ZPjF41Yn75q9vDJ9X+2) 2026-04-09 02:00:11.421837 | orchestrator | 2026-04-09 02:00:11.421846 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 02:00:11.421856 | orchestrator | Thursday 09 April 2026 02:00:08 +0000 (0:00:01.177) 
0:00:28.049 ******** 2026-04-09 02:00:11.421867 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnrKLMwUkcZRdLvRnsAAg3PiZx/H/AO6IMgIkAUwG6Cz63tF8gR1Tjn32rrHN6E+ZDZ8H1Dq+bQlnxMc/r1AwE=) 2026-04-09 02:00:11.421894 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPWPY9VlSWUhWC/ZP/7D3AxZpuor6+RiNKVN8p+xsyxBNuQVjw6/PYhVIUXX78ZQtrrsD9FuXNQXPDN4qNbeuuOkaDjMuYXeF3CKUWEiJhCRIkyj0/6HnHzeKfttc+k+RMIt56c/N2HmeMklkdYgytGXAwcOR2HxuuMnIZjWDqon6UQNFwrIuze0PkbDECFX0tkVQwCstDNYLJsrlZTzQZs+UjbX/bxsPTDB3WRtSt3YQj48lI9UVZGcv3KfVDJAxQnH8i/CWoAhiwOln9mKVPv068Dwig5S3R5Aue/3PFhi7tX/X8MmXncLY0BzwMyiS7mcJBpFDP0aIlxduRMpvKY6wnbbzNwjJqNLGcTFak2U1ASbqDnGAmZ8yXAm72x7DXohlyXqD9cp/C/nkVm2FkUPXxER7wYnBJjTc2hI+T1ONvJ+eEKG2VtOUZiyIniXi2gWowMldObFQaL1zUWkGy6Ezn9i2D1sPre50QiOiI+XVQsouXC+Vcm2CwFua2gus=) 2026-04-09 02:00:11.421904 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMzSp7aP4+hfdtNRNoKlj9VfwNlgfUNMzFX8F7bFo9fN) 2026-04-09 02:00:11.421910 | orchestrator | 2026-04-09 02:00:11.421916 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-09 02:00:11.421929 | orchestrator | Thursday 09 April 2026 02:00:09 +0000 (0:00:01.190) 0:00:29.240 ******** 2026-04-09 02:00:11.421936 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-09 02:00:11.421942 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-09 02:00:11.421963 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-09 02:00:11.421969 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-09 02:00:11.421975 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-09 02:00:11.421981 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-09 
02:00:11.421986 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-09 02:00:11.421992 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:00:11.421999 | orchestrator | 2026-04-09 02:00:11.422005 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-09 02:00:11.422010 | orchestrator | Thursday 09 April 2026 02:00:10 +0000 (0:00:00.230) 0:00:29.470 ******** 2026-04-09 02:00:11.422056 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:00:11.422063 | orchestrator | 2026-04-09 02:00:11.422069 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-09 02:00:11.422079 | orchestrator | Thursday 09 April 2026 02:00:10 +0000 (0:00:00.061) 0:00:29.531 ******** 2026-04-09 02:00:11.422085 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:00:11.422091 | orchestrator | 2026-04-09 02:00:11.422098 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-09 02:00:11.422105 | orchestrator | Thursday 09 April 2026 02:00:10 +0000 (0:00:00.063) 0:00:29.595 ******** 2026-04-09 02:00:11.422113 | orchestrator | changed: [testbed-manager] 2026-04-09 02:00:11.422120 | orchestrator | 2026-04-09 02:00:11.422127 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:00:11.422135 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 02:00:11.422143 | orchestrator | 2026-04-09 02:00:11.422151 | orchestrator | 2026-04-09 02:00:11.422158 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:00:11.422165 | orchestrator | Thursday 09 April 2026 02:00:11 +0000 (0:00:00.837) 0:00:30.433 ******** 2026-04-09 02:00:11.422172 | orchestrator | =============================================================================== 
2026-04-09 02:00:11.422179 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.26s 2026-04-09 02:00:11.422186 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.62s 2026-04-09 02:00:11.422194 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.32s 2026-04-09 02:00:11.422202 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.30s 2026-04-09 02:00:11.422209 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s 2026-04-09 02:00:11.422216 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-04-09 02:00:11.422223 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-04-09 02:00:11.422230 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-04-09 02:00:11.422237 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-04-09 02:00:11.422244 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-04-09 02:00:11.422251 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-04-09 02:00:11.422259 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-04-09 02:00:11.422266 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-04-09 02:00:11.422274 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-09 02:00:11.422293 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-09 02:00:11.422309 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 
2026-04-09 02:00:11.422318 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.84s 2026-04-09 02:00:11.422328 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.23s 2026-04-09 02:00:11.422339 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-04-09 02:00:11.422350 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-04-09 02:00:11.816898 | orchestrator | + osism apply squid 2026-04-09 02:00:24.091015 | orchestrator | 2026-04-09 02:00:24 | INFO  | Task 7c3cdcac-6c4d-437a-bea9-47dc515c43ce (squid) was prepared for execution. 2026-04-09 02:00:24.091144 | orchestrator | 2026-04-09 02:00:24 | INFO  | It takes a moment until task 7c3cdcac-6c4d-437a-bea9-47dc515c43ce (squid) has been started and output is visible here. 2026-04-09 02:02:24.845966 | orchestrator | 2026-04-09 02:02:24.846085 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-09 02:02:24.846095 | orchestrator | 2026-04-09 02:02:24.846100 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-09 02:02:24.846106 | orchestrator | Thursday 09 April 2026 02:00:28 +0000 (0:00:00.185) 0:00:00.185 ******** 2026-04-09 02:02:24.846111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 02:02:24.846116 | orchestrator | 2026-04-09 02:02:24.846121 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-09 02:02:24.846126 | orchestrator | Thursday 09 April 2026 02:00:28 +0000 (0:00:00.097) 0:00:00.283 ******** 2026-04-09 02:02:24.846130 | orchestrator | ok: [testbed-manager] 2026-04-09 02:02:24.846136 | orchestrator | 2026-04-09 
02:02:24.846143 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-09 02:02:24.846150 | orchestrator | Thursday 09 April 2026 02:00:30 +0000 (0:00:01.679) 0:00:01.962 ******** 2026-04-09 02:02:24.846158 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-09 02:02:24.846166 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-09 02:02:24.846174 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-09 02:02:24.846181 | orchestrator | 2026-04-09 02:02:24.846188 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-09 02:02:24.846196 | orchestrator | Thursday 09 April 2026 02:00:31 +0000 (0:00:01.247) 0:00:03.210 ******** 2026-04-09 02:02:24.846203 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-09 02:02:24.846211 | orchestrator | 2026-04-09 02:02:24.846216 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-09 02:02:24.846220 | orchestrator | Thursday 09 April 2026 02:00:32 +0000 (0:00:01.160) 0:00:04.370 ******** 2026-04-09 02:02:24.846225 | orchestrator | ok: [testbed-manager] 2026-04-09 02:02:24.846230 | orchestrator | 2026-04-09 02:02:24.846234 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-09 02:02:24.846239 | orchestrator | Thursday 09 April 2026 02:00:33 +0000 (0:00:00.383) 0:00:04.754 ******** 2026-04-09 02:02:24.846244 | orchestrator | changed: [testbed-manager] 2026-04-09 02:02:24.846249 | orchestrator | 2026-04-09 02:02:24.846254 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-09 02:02:24.846258 | orchestrator | Thursday 09 April 2026 02:00:34 +0000 (0:00:00.961) 0:00:05.715 ******** 2026-04-09 02:02:24.846263 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage 
squid service (10 retries left). 2026-04-09 02:02:24.846271 | orchestrator | ok: [testbed-manager] 2026-04-09 02:02:24.846275 | orchestrator | 2026-04-09 02:02:24.846280 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-09 02:02:24.846303 | orchestrator | Thursday 09 April 2026 02:01:11 +0000 (0:00:37.349) 0:00:43.065 ******** 2026-04-09 02:02:24.846308 | orchestrator | changed: [testbed-manager] 2026-04-09 02:02:24.846312 | orchestrator | 2026-04-09 02:02:24.846317 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-09 02:02:24.846322 | orchestrator | Thursday 09 April 2026 02:01:23 +0000 (0:00:12.207) 0:00:55.273 ******** 2026-04-09 02:02:24.846326 | orchestrator | Pausing for 60 seconds 2026-04-09 02:02:24.846331 | orchestrator | changed: [testbed-manager] 2026-04-09 02:02:24.846336 | orchestrator | 2026-04-09 02:02:24.846340 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-09 02:02:24.846345 | orchestrator | Thursday 09 April 2026 02:02:23 +0000 (0:01:00.083) 0:01:55.357 ******** 2026-04-09 02:02:24.846349 | orchestrator | ok: [testbed-manager] 2026-04-09 02:02:24.846353 | orchestrator | 2026-04-09 02:02:24.846358 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-09 02:02:24.846362 | orchestrator | Thursday 09 April 2026 02:02:23 +0000 (0:00:00.077) 0:01:55.434 ******** 2026-04-09 02:02:24.846366 | orchestrator | changed: [testbed-manager] 2026-04-09 02:02:24.846371 | orchestrator | 2026-04-09 02:02:24.846375 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:02:24.846380 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:02:24.846384 | orchestrator | 2026-04-09 02:02:24.846389 | orchestrator | 2026-04-09 
02:02:24.846393 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:02:24.846398 | orchestrator | Thursday 09 April 2026 02:02:24 +0000 (0:00:00.658) 0:01:56.093 ******** 2026-04-09 02:02:24.846402 | orchestrator | =============================================================================== 2026-04-09 02:02:24.846406 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-04-09 02:02:24.846411 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 37.35s 2026-04-09 02:02:24.846415 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.21s 2026-04-09 02:02:24.846433 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.68s 2026-04-09 02:02:24.846437 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.25s 2026-04-09 02:02:24.846442 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.16s 2026-04-09 02:02:24.846446 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.96s 2026-04-09 02:02:24.846451 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2026-04-09 02:02:24.846455 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-04-09 02:02:24.846459 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2026-04-09 02:02:24.846464 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-04-09 02:02:25.204919 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-09 02:02:25.205032 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-09 02:02:25.267815 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 02:02:25.267932 | orchestrator | + 
/opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-04-09 02:02:25.273575 | orchestrator | + set -e 2026-04-09 02:02:25.273659 | orchestrator | + NAMESPACE=kolla/release 2026-04-09 02:02:25.273675 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-09 02:02:25.281574 | orchestrator | ++ semver 9.5.0 9.0.0 2026-04-09 02:02:25.347534 | orchestrator | + [[ 1 -lt 0 ]] 2026-04-09 02:02:25.347996 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-09 02:02:37.608325 | orchestrator | 2026-04-09 02:02:37 | INFO  | Task 95ece57e-c1b1-43ec-9b0d-429b6d293291 (operator) was prepared for execution. 2026-04-09 02:02:37.608406 | orchestrator | 2026-04-09 02:02:37 | INFO  | It takes a moment until task 95ece57e-c1b1-43ec-9b0d-429b6d293291 (operator) has been started and output is visible here. 2026-04-09 02:02:54.789610 | orchestrator | 2026-04-09 02:02:54.789719 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-09 02:02:54.789736 | orchestrator | 2026-04-09 02:02:54.789749 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 02:02:54.789760 | orchestrator | Thursday 09 April 2026 02:02:42 +0000 (0:00:00.163) 0:00:00.163 ******** 2026-04-09 02:02:54.789770 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:02:54.789838 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:02:54.789846 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:02:54.789853 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:02:54.789859 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:02:54.789866 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:02:54.789872 | orchestrator | 2026-04-09 02:02:54.789879 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-09 02:02:54.789886 | orchestrator | Thursday 09 April 
2026 02:02:45 +0000 (0:00:03.521) 0:00:03.685 ******** 2026-04-09 02:02:54.789892 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:02:54.789898 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:02:54.789905 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:02:54.789926 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:02:54.789932 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:02:54.789938 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:02:54.789945 | orchestrator | 2026-04-09 02:02:54.789951 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-09 02:02:54.789957 | orchestrator | 2026-04-09 02:02:54.789964 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-09 02:02:54.789970 | orchestrator | Thursday 09 April 2026 02:02:46 +0000 (0:00:00.830) 0:00:04.516 ******** 2026-04-09 02:02:54.789976 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:02:54.789982 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:02:54.789989 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:02:54.789995 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:02:54.790001 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:02:54.790008 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:02:54.790055 | orchestrator | 2026-04-09 02:02:54.790064 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-09 02:02:54.790071 | orchestrator | Thursday 09 April 2026 02:02:46 +0000 (0:00:00.224) 0:00:04.740 ******** 2026-04-09 02:02:54.790077 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:02:54.790083 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:02:54.790090 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:02:54.790096 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:02:54.790102 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:02:54.790108 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:02:54.790114 | 
orchestrator | 2026-04-09 02:02:54.790121 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-09 02:02:54.790127 | orchestrator | Thursday 09 April 2026 02:02:47 +0000 (0:00:00.213) 0:00:04.953 ******** 2026-04-09 02:02:54.790134 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:02:54.790141 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:02:54.790147 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:02:54.790154 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:02:54.790160 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:02:54.790166 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:02:54.790172 | orchestrator | 2026-04-09 02:02:54.790180 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-09 02:02:54.790187 | orchestrator | Thursday 09 April 2026 02:02:47 +0000 (0:00:00.627) 0:00:05.581 ******** 2026-04-09 02:02:54.790195 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:02:54.790202 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:02:54.790210 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:02:54.790218 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:02:54.790225 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:02:54.790233 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:02:54.790259 | orchestrator | 2026-04-09 02:02:54.790266 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-09 02:02:54.790274 | orchestrator | Thursday 09 April 2026 02:02:48 +0000 (0:00:00.799) 0:00:06.380 ******** 2026-04-09 02:02:54.790282 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-09 02:02:54.790289 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-09 02:02:54.790296 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-09 02:02:54.790304 | orchestrator | changed: [testbed-node-3] => 
(item=adm) 2026-04-09 02:02:54.790311 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-09 02:02:54.790319 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-09 02:02:54.790326 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-09 02:02:54.790333 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-09 02:02:54.790340 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-09 02:02:54.790347 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-09 02:02:54.790354 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-09 02:02:54.790362 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-09 02:02:54.790369 | orchestrator | 2026-04-09 02:02:54.790377 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-09 02:02:54.790385 | orchestrator | Thursday 09 April 2026 02:02:49 +0000 (0:00:01.247) 0:00:07.628 ******** 2026-04-09 02:02:54.790392 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:02:54.790399 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:02:54.790405 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:02:54.790411 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:02:54.790417 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:02:54.790424 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:02:54.790430 | orchestrator | 2026-04-09 02:02:54.790436 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-09 02:02:54.790444 | orchestrator | Thursday 09 April 2026 02:02:51 +0000 (0:00:01.422) 0:00:09.050 ******** 2026-04-09 02:02:54.790450 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-09 02:02:54.790457 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-04-09 02:02:54.790463 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-09 02:02:54.790469 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 02:02:54.790492 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 02:02:54.790499 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 02:02:54.790505 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 02:02:54.790511 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 02:02:54.790518 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 02:02:54.790524 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-09 02:02:54.790531 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-09 02:02:54.790537 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-09 02:02:54.790543 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-09 02:02:54.790549 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-09 02:02:54.790555 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-09 02:02:54.790562 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-09 02:02:54.790568 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-09 02:02:54.790575 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-09 02:02:54.790581 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-09 02:02:54.790587 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-09 02:02:54.790602 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-09 02:02:54.790612 | 
orchestrator | 2026-04-09 02:02:54.790627 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-09 02:02:54.790642 | orchestrator | Thursday 09 April 2026 02:02:52 +0000 (0:00:01.319) 0:00:10.369 ******** 2026-04-09 02:02:54.790651 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:02:54.790659 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:02:54.790668 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:02:54.790678 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:02:54.790687 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:02:54.790696 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:02:54.790706 | orchestrator | 2026-04-09 02:02:54.790715 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-09 02:02:54.790724 | orchestrator | Thursday 09 April 2026 02:02:52 +0000 (0:00:00.183) 0:00:10.553 ******** 2026-04-09 02:02:54.790735 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:02:54.790745 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:02:54.790755 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:02:54.790764 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:02:54.790795 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:02:54.790815 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:02:54.790827 | orchestrator | 2026-04-09 02:02:54.790836 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-09 02:02:54.790846 | orchestrator | Thursday 09 April 2026 02:02:52 +0000 (0:00:00.204) 0:00:10.757 ******** 2026-04-09 02:02:54.790856 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:02:54.790866 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:02:54.790876 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:02:54.790887 | orchestrator | changed: [testbed-node-3] 2026-04-09 
02:02:54.790896 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:02:54.790902 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:02:54.790908 | orchestrator | 2026-04-09 02:02:54.790915 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-09 02:02:54.790921 | orchestrator | Thursday 09 April 2026 02:02:53 +0000 (0:00:00.597) 0:00:11.355 ******** 2026-04-09 02:02:54.790928 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:02:54.790934 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:02:54.790940 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:02:54.790947 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:02:54.790953 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:02:54.790959 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:02:54.790965 | orchestrator | 2026-04-09 02:02:54.790972 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-09 02:02:54.790978 | orchestrator | Thursday 09 April 2026 02:02:53 +0000 (0:00:00.240) 0:00:11.595 ******** 2026-04-09 02:02:54.790985 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-09 02:02:54.791001 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 02:02:54.791008 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:02:54.791014 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:02:54.791020 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 02:02:54.791027 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 02:02:54.791033 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 02:02:54.791039 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:02:54.791046 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:02:54.791052 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:02:54.791058 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-09 
02:02:54.791064 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:02:54.791071 | orchestrator | 2026-04-09 02:02:54.791077 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-09 02:02:54.791083 | orchestrator | Thursday 09 April 2026 02:02:54 +0000 (0:00:00.727) 0:00:12.323 ******** 2026-04-09 02:02:54.791096 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:02:54.791103 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:02:54.791109 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:02:54.791115 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:02:54.791121 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:02:54.791128 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:02:54.791134 | orchestrator | 2026-04-09 02:02:54.791140 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-09 02:02:54.791147 | orchestrator | Thursday 09 April 2026 02:02:54 +0000 (0:00:00.189) 0:00:12.512 ******** 2026-04-09 02:02:54.791153 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:02:54.791160 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:02:54.791166 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:02:54.791172 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:02:54.791185 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:02:56.300229 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:02:56.300351 | orchestrator | 2026-04-09 02:02:56.300377 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-09 02:02:56.300395 | orchestrator | Thursday 09 April 2026 02:02:54 +0000 (0:00:00.196) 0:00:12.709 ******** 2026-04-09 02:02:56.300414 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:02:56.300431 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:02:56.300447 | orchestrator | skipping: [testbed-node-2] 2026-04-09 
02:02:56.300465 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:02:56.300483 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:02:56.300500 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:02:56.300518 | orchestrator | 2026-04-09 02:02:56.300537 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-09 02:02:56.300556 | orchestrator | Thursday 09 April 2026 02:02:54 +0000 (0:00:00.202) 0:00:12.912 ******** 2026-04-09 02:02:56.300575 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:02:56.300595 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:02:56.300638 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:02:56.300659 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:02:56.300678 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:02:56.300696 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:02:56.300707 | orchestrator | 2026-04-09 02:02:56.300718 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-09 02:02:56.300730 | orchestrator | Thursday 09 April 2026 02:02:55 +0000 (0:00:00.739) 0:00:13.652 ******** 2026-04-09 02:02:56.300741 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:02:56.300752 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:02:56.300767 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:02:56.300815 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:02:56.300829 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:02:56.300842 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:02:56.300855 | orchestrator | 2026-04-09 02:02:56.300868 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:02:56.300882 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 02:02:56.300902 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 02:02:56.300922 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 02:02:56.300943 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 02:02:56.300963 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 02:02:56.301018 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 02:02:56.301037 | orchestrator | 2026-04-09 02:02:56.301056 | orchestrator | 2026-04-09 02:02:56.301077 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:02:56.301098 | orchestrator | Thursday 09 April 2026 02:02:55 +0000 (0:00:00.273) 0:00:13.925 ******** 2026-04-09 02:02:56.301119 | orchestrator | =============================================================================== 2026-04-09 02:02:56.301140 | orchestrator | Gathering Facts --------------------------------------------------------- 3.52s 2026-04-09 02:02:56.301154 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.42s 2026-04-09 02:02:56.301165 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.32s 2026-04-09 02:02:56.301177 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s 2026-04-09 02:02:56.301188 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s 2026-04-09 02:02:56.301199 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2026-04-09 02:02:56.301210 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.74s 2026-04-09 02:02:56.301221 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.73s 2026-04-09 02:02:56.301231 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s 2026-04-09 02:02:56.301242 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s 2026-04-09 02:02:56.301253 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s 2026-04-09 02:02:56.301263 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.24s 2026-04-09 02:02:56.301274 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.22s 2026-04-09 02:02:56.301285 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.21s 2026-04-09 02:02:56.301296 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s 2026-04-09 02:02:56.301307 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.20s 2026-04-09 02:02:56.301317 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s 2026-04-09 02:02:56.301328 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s 2026-04-09 02:02:56.301339 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-04-09 02:02:56.672460 | orchestrator | + osism apply --environment custom facts 2026-04-09 02:02:58.753671 | orchestrator | 2026-04-09 02:02:58 | INFO  | Trying to run play facts in environment custom 2026-04-09 02:03:08.878304 | orchestrator | 2026-04-09 02:03:08 | INFO  | Task d02a9e2d-21a2-49ad-a2f8-a044229b17ef (facts) was prepared for execution. 2026-04-09 02:03:08.878396 | orchestrator | 2026-04-09 02:03:08 | INFO  | It takes a moment until task d02a9e2d-21a2-49ad-a2f8-a044229b17ef (facts) has been started and output is visible here. 
2026-04-09 02:03:54.549240 | orchestrator | 2026-04-09 02:03:54.549419 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-09 02:03:54.549448 | orchestrator | 2026-04-09 02:03:54.549468 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-09 02:03:54.549481 | orchestrator | Thursday 09 April 2026 02:03:13 +0000 (0:00:00.091) 0:00:00.091 ******** 2026-04-09 02:03:54.549493 | orchestrator | ok: [testbed-manager] 2026-04-09 02:03:54.549505 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:03:54.549517 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:03:54.549528 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:03:54.549539 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:03:54.549550 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:03:54.549587 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:03:54.549600 | orchestrator | 2026-04-09 02:03:54.549612 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-09 02:03:54.549623 | orchestrator | Thursday 09 April 2026 02:03:14 +0000 (0:00:01.402) 0:00:01.493 ******** 2026-04-09 02:03:54.549634 | orchestrator | ok: [testbed-manager] 2026-04-09 02:03:54.549645 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:03:54.549656 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:03:54.549667 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:03:54.549677 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:03:54.549688 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:03:54.549700 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:03:54.549719 | orchestrator | 2026-04-09 02:03:54.549743 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-09 02:03:54.549769 | orchestrator | 2026-04-09 02:03:54.549787 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-04-09 02:03:54.549887 | orchestrator | Thursday 09 April 2026 02:03:16 +0000 (0:00:01.333) 0:00:02.827 ******** 2026-04-09 02:03:54.549906 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:03:54.549922 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:03:54.549939 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:03:54.549957 | orchestrator | 2026-04-09 02:03:54.549974 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-09 02:03:54.549994 | orchestrator | Thursday 09 April 2026 02:03:16 +0000 (0:00:00.120) 0:00:02.947 ******** 2026-04-09 02:03:54.550010 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:03:54.550117 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:03:54.550131 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:03:54.550143 | orchestrator | 2026-04-09 02:03:54.550154 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-09 02:03:54.550165 | orchestrator | Thursday 09 April 2026 02:03:16 +0000 (0:00:00.211) 0:00:03.158 ******** 2026-04-09 02:03:54.550176 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:03:54.550187 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:03:54.550198 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:03:54.550209 | orchestrator | 2026-04-09 02:03:54.550220 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-09 02:03:54.550232 | orchestrator | Thursday 09 April 2026 02:03:16 +0000 (0:00:00.245) 0:00:03.404 ******** 2026-04-09 02:03:54.550245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 02:03:54.550258 | orchestrator | 2026-04-09 02:03:54.550269 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-04-09 02:03:54.550280 | orchestrator | Thursday 09 April 2026 02:03:16 +0000 (0:00:00.176) 0:00:03.581 ******** 2026-04-09 02:03:54.550294 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:03:54.550317 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:03:54.550343 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:03:54.550360 | orchestrator | 2026-04-09 02:03:54.550378 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-09 02:03:54.550396 | orchestrator | Thursday 09 April 2026 02:03:17 +0000 (0:00:00.452) 0:00:04.033 ******** 2026-04-09 02:03:54.550412 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:03:54.550430 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:03:54.550448 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:03:54.550464 | orchestrator | 2026-04-09 02:03:54.550482 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-09 02:03:54.550501 | orchestrator | Thursday 09 April 2026 02:03:17 +0000 (0:00:00.147) 0:00:04.181 ******** 2026-04-09 02:03:54.550519 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:03:54.550539 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:03:54.550558 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:03:54.550575 | orchestrator | 2026-04-09 02:03:54.550594 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-09 02:03:54.550623 | orchestrator | Thursday 09 April 2026 02:03:18 +0000 (0:00:01.084) 0:00:05.266 ******** 2026-04-09 02:03:54.550635 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:03:54.550646 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:03:54.550657 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:03:54.550667 | orchestrator | 2026-04-09 02:03:54.550678 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-09 
02:03:54.550690 | orchestrator | Thursday 09 April 2026 02:03:19 +0000 (0:00:00.497) 0:00:05.763 ******** 2026-04-09 02:03:54.550701 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:03:54.550712 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:03:54.550723 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:03:54.550734 | orchestrator | 2026-04-09 02:03:54.550745 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-09 02:03:54.550838 | orchestrator | Thursday 09 April 2026 02:03:20 +0000 (0:00:01.035) 0:00:06.799 ******** 2026-04-09 02:03:54.550854 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:03:54.550865 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:03:54.550876 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:03:54.550887 | orchestrator | 2026-04-09 02:03:54.550898 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-09 02:03:54.550909 | orchestrator | Thursday 09 April 2026 02:03:36 +0000 (0:00:16.485) 0:00:23.284 ******** 2026-04-09 02:03:54.550922 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:03:54.550944 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:03:54.550969 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:03:54.550987 | orchestrator | 2026-04-09 02:03:54.551005 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-04-09 02:03:54.551052 | orchestrator | Thursday 09 April 2026 02:03:36 +0000 (0:00:00.118) 0:00:23.402 ******** 2026-04-09 02:03:54.551070 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:03:54.551088 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:03:54.551105 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:03:54.551124 | orchestrator | 2026-04-09 02:03:54.551151 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-09 
02:03:54.551168 | orchestrator | Thursday 09 April 2026 02:03:45 +0000 (0:00:08.634) 0:00:32.036 ******** 2026-04-09 02:03:54.551186 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:03:54.551203 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:03:54.551220 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:03:54.551239 | orchestrator | 2026-04-09 02:03:54.551259 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-09 02:03:54.551278 | orchestrator | Thursday 09 April 2026 02:03:45 +0000 (0:00:00.460) 0:00:32.497 ******** 2026-04-09 02:03:54.551296 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-04-09 02:03:54.551316 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-04-09 02:03:54.551334 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-04-09 02:03:54.551353 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-04-09 02:03:54.551372 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-04-09 02:03:54.551383 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-04-09 02:03:54.551394 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-04-09 02:03:54.551405 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-04-09 02:03:54.551416 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-04-09 02:03:54.551427 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-04-09 02:03:54.551438 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-04-09 02:03:54.551449 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-04-09 02:03:54.551460 | orchestrator | 2026-04-09 02:03:54.551471 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-04-09 02:03:54.551495 | orchestrator | Thursday 09 April 2026 02:03:49 +0000 (0:00:03.578) 0:00:36.076 ******** 2026-04-09 02:03:54.551506 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:03:54.551517 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:03:54.551528 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:03:54.551539 | orchestrator | 2026-04-09 02:03:54.551551 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 02:03:54.551561 | orchestrator | 2026-04-09 02:03:54.551572 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 02:03:54.551583 | orchestrator | Thursday 09 April 2026 02:03:50 +0000 (0:00:01.361) 0:00:37.437 ******** 2026-04-09 02:03:54.551594 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:03:54.551605 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:03:54.551617 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:03:54.551628 | orchestrator | ok: [testbed-manager] 2026-04-09 02:03:54.551639 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:03:54.551649 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:03:54.551660 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:03:54.551671 | orchestrator | 2026-04-09 02:03:54.551682 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:03:54.551694 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:03:54.551706 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:03:54.551718 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:03:54.551729 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:03:54.551741 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:03:54.551752 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:03:54.551763 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:03:54.551774 | orchestrator | 2026-04-09 02:03:54.551785 | orchestrator | 2026-04-09 02:03:54.551875 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:03:54.551889 | orchestrator | Thursday 09 April 2026 02:03:54 +0000 (0:00:03.802) 0:00:41.239 ******** 2026-04-09 02:03:54.551900 | orchestrator | =============================================================================== 2026-04-09 02:03:54.551911 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.49s 2026-04-09 02:03:54.551922 | orchestrator | Install required packages (Debian) -------------------------------------- 8.63s 2026-04-09 02:03:54.551933 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.80s 2026-04-09 02:03:54.551944 | orchestrator | Copy fact files --------------------------------------------------------- 3.58s 2026-04-09 02:03:54.551955 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s 2026-04-09 02:03:54.551966 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.36s 2026-04-09 02:03:54.551989 | orchestrator | Copy fact file ---------------------------------------------------------- 1.33s 2026-04-09 02:03:54.839433 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s 2026-04-09 02:03:54.839560 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s 2026-04-09 02:03:54.839611 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.50s 2026-04-09 02:03:54.839659 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s 2026-04-09 02:03:54.839678 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2026-04-09 02:03:54.839695 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.25s 2026-04-09 02:03:54.839713 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2026-04-09 02:03:54.839731 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.18s 2026-04-09 02:03:54.839749 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s 2026-04-09 02:03:54.839768 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2026-04-09 02:03:54.839786 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2026-04-09 02:03:55.236985 | orchestrator | + osism apply bootstrap 2026-04-09 02:04:07.567907 | orchestrator | 2026-04-09 02:04:07 | INFO  | Task d4cbdd98-5475-42f1-a03a-64f98df50c22 (bootstrap) was prepared for execution. 2026-04-09 02:04:07.568023 | orchestrator | 2026-04-09 02:04:07 | INFO  | It takes a moment until task d4cbdd98-5475-42f1-a03a-64f98df50c22 (bootstrap) has been started and output is visible here. 
2026-04-09 02:04:25.295958 | orchestrator |
2026-04-09 02:04:25.296054 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-09 02:04:25.296061 | orchestrator |
2026-04-09 02:04:25.296067 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-09 02:04:25.296072 | orchestrator | Thursday 09 April 2026 02:04:12 +0000 (0:00:00.166) 0:00:00.166 ********
2026-04-09 02:04:25.296076 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:25.296082 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:25.296086 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:25.296090 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:25.296094 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:25.296098 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:25.296102 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:25.296106 | orchestrator |
2026-04-09 02:04:25.296110 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 02:04:25.296114 | orchestrator |
2026-04-09 02:04:25.296118 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 02:04:25.296122 | orchestrator | Thursday 09 April 2026 02:04:12 +0000 (0:00:00.274) 0:00:00.441 ********
2026-04-09 02:04:25.296126 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:25.296130 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:25.296134 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:25.296138 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:25.296141 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:25.296145 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:25.296149 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:25.296153 | orchestrator |
2026-04-09 02:04:25.296157 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-09 02:04:25.296161 | orchestrator |
2026-04-09 02:04:25.296165 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 02:04:25.296168 | orchestrator | Thursday 09 April 2026 02:04:17 +0000 (0:00:04.585) 0:00:05.027 ********
2026-04-09 02:04:25.296173 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-09 02:04:25.296177 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-09 02:04:25.296181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-09 02:04:25.296185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 02:04:25.296189 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-09 02:04:25.296193 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-09 02:04:25.296197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 02:04:25.296200 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 02:04:25.296204 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-09 02:04:25.296225 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 02:04:25.296229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 02:04:25.296233 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-09 02:04:25.296237 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-09 02:04:25.296240 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 02:04:25.296244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 02:04:25.296249 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 02:04:25.296252 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 02:04:25.296256 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 02:04:25.296260 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-09 02:04:25.296264 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-09 02:04:25.296268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 02:04:25.296271 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-09 02:04:25.296276 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-09 02:04:25.296282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 02:04:25.296288 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 02:04:25.296294 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 02:04:25.296299 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-09 02:04:25.296305 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 02:04:25.296310 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-09 02:04:25.296317 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 02:04:25.296323 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:04:25.296329 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 02:04:25.296335 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:04:25.296342 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-09 02:04:25.296346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 02:04:25.296350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 02:04:25.296354 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:04:25.296370 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 02:04:25.296374 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 02:04:25.296384 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:04:25.296389 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-09 02:04:25.296396 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 02:04:25.296402 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 02:04:25.296408 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 02:04:25.296414 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 02:04:25.296421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 02:04:25.296444 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 02:04:25.296452 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 02:04:25.296459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 02:04:25.296466 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:04:25.296473 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 02:04:25.296479 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 02:04:25.296485 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 02:04:25.296492 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:04:25.296507 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 02:04:25.296530 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:04:25.296538 | orchestrator |
2026-04-09 02:04:25.296545 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-09 02:04:25.296552 | orchestrator |
2026-04-09 02:04:25.296558 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-09 02:04:25.296565 | orchestrator | Thursday 09 April 2026 02:04:17 +0000 (0:00:00.548) 0:00:05.575 ********
2026-04-09 02:04:25.296571 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:25.296576 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:25.296580 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:25.296585 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:25.296590 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:25.296594 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:25.296599 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:25.296604 | orchestrator |
2026-04-09 02:04:25.296609 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-09 02:04:25.296614 | orchestrator | Thursday 09 April 2026 02:04:18 +0000 (0:00:01.238) 0:00:06.813 ********
2026-04-09 02:04:25.296618 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:25.296623 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:25.296627 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:25.296632 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:25.296637 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:25.296641 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:25.296646 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:25.296650 | orchestrator |
2026-04-09 02:04:25.296655 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-09 02:04:25.296659 | orchestrator | Thursday 09 April 2026 02:04:20 +0000 (0:00:01.255) 0:00:08.069 ********
2026-04-09 02:04:25.296665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:04:25.296672 | orchestrator |
2026-04-09 02:04:25.296677 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-09 02:04:25.296681 | orchestrator | Thursday 09 April 2026 02:04:20 +0000 (0:00:00.318) 0:00:08.387 ********
2026-04-09 02:04:25.296686 | orchestrator | changed: [testbed-manager]
2026-04-09 02:04:25.296691 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:04:25.296695 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:04:25.296700 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:04:25.296705 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:04:25.296709 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:04:25.296714 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:04:25.296718 | orchestrator |
2026-04-09 02:04:25.296723 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-09 02:04:25.296727 | orchestrator | Thursday 09 April 2026 02:04:22 +0000 (0:00:02.143) 0:00:10.531 ********
2026-04-09 02:04:25.296732 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:04:25.296737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:04:25.296743 | orchestrator |
2026-04-09 02:04:25.296748 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-09 02:04:25.296753 | orchestrator | Thursday 09 April 2026 02:04:22 +0000 (0:00:00.321) 0:00:10.852 ********
2026-04-09 02:04:25.296758 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:04:25.296762 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:04:25.296767 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:04:25.296771 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:04:25.296776 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:04:25.296780 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:04:25.296789 | orchestrator |
2026-04-09 02:04:25.296796 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-09 02:04:25.296802 | orchestrator | Thursday 09 April 2026 02:04:23 +0000 (0:00:01.008) 0:00:11.860 ********
2026-04-09 02:04:25.296832 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:04:25.296836 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:04:25.296840 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:04:25.296844 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:04:25.296847 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:04:25.296851 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:04:25.296855 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:04:25.296858 | orchestrator |
2026-04-09 02:04:25.296862 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-09 02:04:25.296866 | orchestrator | Thursday 09 April 2026 02:04:24 +0000 (0:00:00.669) 0:00:12.530 ********
2026-04-09 02:04:25.296870 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:04:25.296873 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:04:25.296877 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:04:25.296881 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:04:25.296885 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:04:25.296888 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:04:25.296892 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:25.296896 | orchestrator |
2026-04-09 02:04:25.296900 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-09 02:04:25.296905 | orchestrator | Thursday 09 April 2026 02:04:25 +0000 (0:00:00.470) 0:00:13.000 ********
2026-04-09 02:04:25.296909 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:04:25.296912 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:04:25.296921 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:04:38.901019 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:04:38.901131 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:04:38.901148 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:04:38.901159 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:04:38.901171 | orchestrator |
2026-04-09 02:04:38.901184 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-09 02:04:38.901196 | orchestrator | Thursday 09 April 2026 02:04:25 +0000 (0:00:00.274) 0:00:13.275 ********
2026-04-09 02:04:38.901210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:04:38.901240 | orchestrator |
2026-04-09 02:04:38.901252 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-09 02:04:38.901264 | orchestrator | Thursday 09 April 2026 02:04:25 +0000 (0:00:00.351) 0:00:13.626 ********
2026-04-09 02:04:38.901276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:04:38.901287 | orchestrator |
2026-04-09 02:04:38.901299 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-09 02:04:38.901310 | orchestrator | Thursday 09 April 2026 02:04:26 +0000 (0:00:00.365) 0:00:13.992 ********
2026-04-09 02:04:38.901321 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:38.901332 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:38.901343 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.901354 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.901365 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.901376 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:38.901387 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.901398 | orchestrator |
2026-04-09 02:04:38.901409 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-09 02:04:38.901420 | orchestrator | Thursday 09 April 2026 02:04:27 +0000 (0:00:01.446) 0:00:15.439 ********
2026-04-09 02:04:38.901456 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:04:38.901471 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:04:38.901484 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:04:38.901497 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:04:38.901510 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:04:38.901523 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:04:38.901535 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:04:38.901547 | orchestrator |
2026-04-09 02:04:38.901560 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-09 02:04:38.901573 | orchestrator | Thursday 09 April 2026 02:04:27 +0000 (0:00:00.253) 0:00:15.692 ********
2026-04-09 02:04:38.901587 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.901600 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.901613 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.901626 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.901638 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:38.901651 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:38.901665 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:38.901678 | orchestrator |
2026-04-09 02:04:38.901690 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-09 02:04:38.901703 | orchestrator | Thursday 09 April 2026 02:04:28 +0000 (0:00:00.527) 0:00:16.220 ********
2026-04-09 02:04:38.901716 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:04:38.901728 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:04:38.901741 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:04:38.901755 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:04:38.901767 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:04:38.901780 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:04:38.901793 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:04:38.901808 | orchestrator |
2026-04-09 02:04:38.901888 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-09 02:04:38.901901 | orchestrator | Thursday 09 April 2026 02:04:28 +0000 (0:00:00.384) 0:00:16.605 ********
2026-04-09 02:04:38.901912 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.901923 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:04:38.901934 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:04:38.901945 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:04:38.901956 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:04:38.901966 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:04:38.901987 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:04:38.901998 | orchestrator |
2026-04-09 02:04:38.902009 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-09 02:04:38.902084 | orchestrator | Thursday 09 April 2026 02:04:29 +0000 (0:00:00.545) 0:00:17.150 ********
2026-04-09 02:04:38.902095 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.902107 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:04:38.902117 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:04:38.902166 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:04:38.902177 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:04:38.902188 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:04:38.902199 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:04:38.902210 | orchestrator |
2026-04-09 02:04:38.902222 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-09 02:04:38.902233 | orchestrator | Thursday 09 April 2026 02:04:30 +0000 (0:00:01.105) 0:00:18.256 ********
2026-04-09 02:04:38.902244 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.902255 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:38.902266 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.902277 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:38.902288 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.902299 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:38.902310 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.902321 | orchestrator |
2026-04-09 02:04:38.902332 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-09 02:04:38.902355 | orchestrator | Thursday 09 April 2026 02:04:31 +0000 (0:00:01.102) 0:00:19.358 ********
2026-04-09 02:04:38.902388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:04:38.902400 | orchestrator |
2026-04-09 02:04:38.902410 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-09 02:04:38.902420 | orchestrator | Thursday 09 April 2026 02:04:31 +0000 (0:00:00.362) 0:00:19.721 ********
2026-04-09 02:04:38.902429 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:04:38.902439 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:04:38.902449 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:04:38.902458 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:04:38.902468 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:04:38.902477 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:04:38.902487 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:04:38.902497 | orchestrator |
2026-04-09 02:04:38.902506 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-09 02:04:38.902516 | orchestrator | Thursday 09 April 2026 02:04:34 +0000 (0:00:02.284) 0:00:22.005 ********
2026-04-09 02:04:38.902526 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.902535 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.902545 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.902554 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.902564 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:38.902574 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:38.902583 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:38.902593 | orchestrator |
2026-04-09 02:04:38.902603 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-09 02:04:38.902612 | orchestrator | Thursday 09 April 2026 02:04:34 +0000 (0:00:00.240) 0:00:22.246 ********
2026-04-09 02:04:38.902622 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.902631 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.902641 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.902650 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.902660 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:38.902669 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:38.902679 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:38.902689 | orchestrator |
2026-04-09 02:04:38.902698 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-09 02:04:38.902708 | orchestrator | Thursday 09 April 2026 02:04:34 +0000 (0:00:00.274) 0:00:22.520 ********
2026-04-09 02:04:38.902718 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.902727 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.902737 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.902747 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.902756 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:38.902766 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:38.902776 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:38.902785 | orchestrator |
2026-04-09 02:04:38.902795 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-09 02:04:38.902805 | orchestrator | Thursday 09 April 2026 02:04:34 +0000 (0:00:00.245) 0:00:22.766 ********
2026-04-09 02:04:38.902838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:04:38.902850 | orchestrator |
2026-04-09 02:04:38.902860 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-09 02:04:38.902870 | orchestrator | Thursday 09 April 2026 02:04:35 +0000 (0:00:00.314) 0:00:23.081 ********
2026-04-09 02:04:38.902880 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.902889 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.902906 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.902916 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.902925 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:38.902935 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:38.902945 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:38.902954 | orchestrator |
2026-04-09 02:04:38.902964 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-09 02:04:38.902974 | orchestrator | Thursday 09 April 2026 02:04:35 +0000 (0:00:00.531) 0:00:23.612 ********
2026-04-09 02:04:38.902984 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:04:38.902993 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:04:38.903003 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:04:38.903013 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:04:38.903023 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:04:38.903032 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:04:38.903042 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:04:38.903052 | orchestrator |
2026-04-09 02:04:38.903062 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-09 02:04:38.903072 | orchestrator | Thursday 09 April 2026 02:04:36 +0000 (0:00:00.268) 0:00:23.880 ********
2026-04-09 02:04:38.903082 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.903091 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.903101 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.903111 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.903120 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:04:38.903130 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:04:38.903140 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:04:38.903149 | orchestrator |
2026-04-09 02:04:38.903159 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-09 02:04:38.903169 | orchestrator | Thursday 09 April 2026 02:04:37 +0000 (0:00:01.095) 0:00:24.976 ********
2026-04-09 02:04:38.903178 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.903188 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.903198 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.903207 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.903217 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:04:38.903227 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:04:38.903236 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:04:38.903246 | orchestrator |
2026-04-09 02:04:38.903256 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-09 02:04:38.903265 | orchestrator | Thursday 09 April 2026 02:04:37 +0000 (0:00:00.638) 0:00:25.615 ********
2026-04-09 02:04:38.903275 | orchestrator | ok: [testbed-manager]
2026-04-09 02:04:38.903285 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:04:38.903295 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:04:38.903313 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:04:38.903330 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:05:21.520505 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:05:21.520640 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:05:21.520667 | orchestrator |
2026-04-09 02:05:21.520688 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-09 02:05:21.520708 | orchestrator | Thursday 09 April 2026 02:04:38 +0000 (0:00:01.149) 0:00:26.765 ********
2026-04-09 02:05:21.520726 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:05:21.520745 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:05:21.520763 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:05:21.520781 | orchestrator | changed: [testbed-manager]
2026-04-09 02:05:21.520800 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:05:21.520817 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:05:21.520864 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:05:21.520883 | orchestrator |
2026-04-09 02:05:21.520900 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-09 02:05:21.520919 | orchestrator | Thursday 09 April 2026 02:04:54 +0000 (0:00:15.979) 0:00:42.745 ********
2026-04-09 02:05:21.520937 | orchestrator | ok: [testbed-manager]
2026-04-09 02:05:21.520985 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:05:21.521004 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:05:21.521021 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:05:21.521039 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:05:21.521058 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:05:21.521077 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:05:21.521096 | orchestrator |
2026-04-09 02:05:21.521113 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-09 02:05:21.521130 | orchestrator | Thursday 09 April 2026 02:04:55 +0000 (0:00:00.253) 0:00:42.998 ********
2026-04-09 02:05:21.521148 | orchestrator | ok: [testbed-manager]
2026-04-09 02:05:21.521166 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:05:21.521181 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:05:21.521198 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:05:21.521213 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:05:21.521230 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:05:21.521247 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:05:21.521265 | orchestrator |
2026-04-09 02:05:21.521283 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-09 02:05:21.521302 | orchestrator | Thursday 09 April 2026 02:04:55 +0000 (0:00:00.260) 0:00:43.259 ********
2026-04-09 02:05:21.521321 | orchestrator | ok: [testbed-manager]
2026-04-09 02:05:21.521338 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:05:21.521357 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:05:21.521375 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:05:21.521392 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:05:21.521409 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:05:21.521426 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:05:21.521444 | orchestrator |
2026-04-09 02:05:21.521461 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-09 02:05:21.521478 | orchestrator | Thursday 09 April 2026 02:04:55 +0000 (0:00:00.270) 0:00:43.530 ********
2026-04-09 02:05:21.521499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:05:21.521519 | orchestrator |
2026-04-09 02:05:21.521537 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-09 02:05:21.521553 | orchestrator | Thursday 09 April 2026 02:04:55 +0000 (0:00:00.323) 0:00:43.853 ********
2026-04-09 02:05:21.521569 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:05:21.521586 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:05:21.521603 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:05:21.521619 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:05:21.521636 | orchestrator | ok: [testbed-manager]
2026-04-09 02:05:21.521651 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:05:21.521667 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:05:21.521683 | orchestrator |
2026-04-09 02:05:21.521699 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-09 02:05:21.521716 | orchestrator | Thursday 09 April 2026 02:04:57 +0000 (0:00:01.718) 0:00:45.572 ********
2026-04-09 02:05:21.521732 | orchestrator | changed: [testbed-manager]
2026-04-09 02:05:21.521748 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:05:21.521764 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:05:21.521778 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:05:21.521793 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:05:21.521809 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:05:21.521909 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:05:21.521927 | orchestrator |
2026-04-09 02:05:21.521942 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-09 02:05:21.521979 | orchestrator | Thursday 09 April 2026 02:04:58 +0000 (0:00:01.130) 0:00:46.702 ********
2026-04-09 02:05:21.521997 | orchestrator | ok: [testbed-manager]
2026-04-09 02:05:21.522014 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:05:21.522110 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:05:21.522143 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:05:21.522159 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:05:21.522174 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:05:21.522190 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:05:21.522206 | orchestrator |
2026-04-09 02:05:21.522223 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-09 02:05:21.522239 | orchestrator | Thursday 09 April 2026 02:04:59 +0000 (0:00:00.831) 0:00:47.533 ********
2026-04-09 02:05:21.522256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:05:21.522274 | orchestrator |
2026-04-09 02:05:21.522291 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-09 02:05:21.522309 | orchestrator | Thursday 09 April 2026 02:05:00 +0000 (0:00:00.358) 0:00:47.892 ********
2026-04-09 02:05:21.522325 | orchestrator | changed: [testbed-manager]
2026-04-09 02:05:21.522342 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:05:21.522358 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:05:21.522375 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:05:21.522391 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:05:21.522407 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:05:21.522423 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:05:21.522438 | orchestrator |
2026-04-09 02:05:21.522485 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-09 02:05:21.522503 | orchestrator | Thursday 09 April 2026 02:05:01 +0000 (0:00:01.012) 0:00:48.905 ********
2026-04-09 02:05:21.522520 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:05:21.522536 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:05:21.522552 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:05:21.522568 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:05:21.522585 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:05:21.522600 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:05:21.522615 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:05:21.522631 | orchestrator |
2026-04-09 02:05:21.522648 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-09 02:05:21.522664 | orchestrator | Thursday 09 April 2026 02:05:01 +0000 (0:00:00.275) 0:00:49.180 ********
2026-04-09 02:05:21.522680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:05:21.522696 | orchestrator |
2026-04-09 02:05:21.522712 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-09 02:05:21.522728 | orchestrator | Thursday 09 April 2026 02:05:01 +0000 (0:00:00.428) 0:00:49.609 ********
2026-04-09 02:05:21.522744 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:05:21.522759 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:05:21.522775 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:05:21.522791 | orchestrator | ok: [testbed-manager]
2026-04-09 02:05:21.522807 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:05:21.522822 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:05:21.522915 | orchestrator | ok: [testbed-node-3]
2026-04-09
02:05:21.522930 | orchestrator | 2026-04-09 02:05:21.522946 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-09 02:05:21.522962 | orchestrator | Thursday 09 April 2026 02:05:03 +0000 (0:00:01.744) 0:00:51.353 ******** 2026-04-09 02:05:21.522978 | orchestrator | changed: [testbed-manager] 2026-04-09 02:05:21.522993 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:05:21.523005 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:05:21.523018 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:05:21.523031 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:05:21.523043 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:05:21.523055 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:05:21.523084 | orchestrator | 2026-04-09 02:05:21.523098 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-09 02:05:21.523111 | orchestrator | Thursday 09 April 2026 02:05:04 +0000 (0:00:01.138) 0:00:52.491 ******** 2026-04-09 02:05:21.523126 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:05:21.523139 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:05:21.523152 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:05:21.523166 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:05:21.523178 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:05:21.523190 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:05:21.523203 | orchestrator | changed: [testbed-manager] 2026-04-09 02:05:21.523215 | orchestrator | 2026-04-09 02:05:21.523229 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-09 02:05:21.523242 | orchestrator | Thursday 09 April 2026 02:05:18 +0000 (0:00:14.014) 0:01:06.506 ******** 2026-04-09 02:05:21.523256 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:05:21.523269 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:05:21.523283 | 
orchestrator | ok: [testbed-node-4] 2026-04-09 02:05:21.523296 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:05:21.523308 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:05:21.523320 | orchestrator | ok: [testbed-manager] 2026-04-09 02:05:21.523333 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:05:21.523347 | orchestrator | 2026-04-09 02:05:21.523360 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-09 02:05:21.523374 | orchestrator | Thursday 09 April 2026 02:05:19 +0000 (0:00:01.070) 0:01:07.577 ******** 2026-04-09 02:05:21.523388 | orchestrator | ok: [testbed-manager] 2026-04-09 02:05:21.523402 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:05:21.523416 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:05:21.523430 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:05:21.523444 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:05:21.523459 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:05:21.523473 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:05:21.523488 | orchestrator | 2026-04-09 02:05:21.523502 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-09 02:05:21.523516 | orchestrator | Thursday 09 April 2026 02:05:20 +0000 (0:00:00.942) 0:01:08.520 ******** 2026-04-09 02:05:21.523542 | orchestrator | ok: [testbed-manager] 2026-04-09 02:05:21.523557 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:05:21.523571 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:05:21.523585 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:05:21.523598 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:05:21.523613 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:05:21.523627 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:05:21.523640 | orchestrator | 2026-04-09 02:05:21.523655 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-09 02:05:21.523671 | 
orchestrator | Thursday 09 April 2026 02:05:20 +0000 (0:00:00.277) 0:01:08.797 ******** 2026-04-09 02:05:21.523685 | orchestrator | ok: [testbed-manager] 2026-04-09 02:05:21.523699 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:05:21.523713 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:05:21.523726 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:05:21.523738 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:05:21.523751 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:05:21.523763 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:05:21.523776 | orchestrator | 2026-04-09 02:05:21.523787 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-09 02:05:21.523800 | orchestrator | Thursday 09 April 2026 02:05:21 +0000 (0:00:00.254) 0:01:09.052 ******** 2026-04-09 02:05:21.523814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:05:21.523852 | orchestrator | 2026-04-09 02:05:21.523881 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-09 02:07:42.487531 | orchestrator | Thursday 09 April 2026 02:05:21 +0000 (0:00:00.333) 0:01:09.386 ******** 2026-04-09 02:07:42.487609 | orchestrator | ok: [testbed-manager] 2026-04-09 02:07:42.487617 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:07:42.487622 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:07:42.487627 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:07:42.487635 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:07:42.487641 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:07:42.487648 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:07:42.487655 | orchestrator | 2026-04-09 02:07:42.487662 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-04-09 02:07:42.487669 | orchestrator | Thursday 09 April 2026 02:05:23 +0000 (0:00:01.732) 0:01:11.119 ******** 2026-04-09 02:07:42.487675 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:07:42.487683 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:07:42.487689 | orchestrator | changed: [testbed-manager] 2026-04-09 02:07:42.487695 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:07:42.487701 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:07:42.487707 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:07:42.487722 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:07:42.487729 | orchestrator | 2026-04-09 02:07:42.487735 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-09 02:07:42.487750 | orchestrator | Thursday 09 April 2026 02:05:23 +0000 (0:00:00.602) 0:01:11.721 ******** 2026-04-09 02:07:42.487757 | orchestrator | ok: [testbed-manager] 2026-04-09 02:07:42.487764 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:07:42.487770 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:07:42.487776 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:07:42.487782 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:07:42.487788 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:07:42.487794 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:07:42.487800 | orchestrator | 2026-04-09 02:07:42.487807 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-09 02:07:42.487814 | orchestrator | Thursday 09 April 2026 02:05:24 +0000 (0:00:00.278) 0:01:12.000 ******** 2026-04-09 02:07:42.487821 | orchestrator | ok: [testbed-manager] 2026-04-09 02:07:42.487827 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:07:42.487833 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:07:42.487840 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:07:42.487845 | orchestrator | ok: [testbed-node-2] 
2026-04-09 02:07:42.487849 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:07:42.487853 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:07:42.487857 | orchestrator |
2026-04-09 02:07:42.487861 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-09 02:07:42.487866 | orchestrator | Thursday 09 April 2026 02:05:25 +0000 (0:00:01.371) 0:01:13.372 ********
2026-04-09 02:07:42.487870 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:07:42.487874 | orchestrator | changed: [testbed-manager]
2026-04-09 02:07:42.487908 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:07:42.487912 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:07:42.487916 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:07:42.487920 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:07:42.487924 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:07:42.487928 | orchestrator |
2026-04-09 02:07:42.487934 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-09 02:07:42.487939 | orchestrator | Thursday 09 April 2026 02:05:27 +0000 (0:00:01.833) 0:01:15.205 ********
2026-04-09 02:07:42.487943 | orchestrator | ok: [testbed-manager]
2026-04-09 02:07:42.487947 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:07:42.487950 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:07:42.487955 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:07:42.487959 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:07:42.487963 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:07:42.487966 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:07:42.487970 | orchestrator |
2026-04-09 02:07:42.487977 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-09 02:07:42.488007 | orchestrator | Thursday 09 April 2026 02:05:29 +0000 (0:00:02.598) 0:01:17.804 ********
2026-04-09 02:07:42.488014 | orchestrator | ok: [testbed-manager]
2026-04-09 02:07:42.488020 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:07:42.488026 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:07:42.488032 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:07:42.488038 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:07:42.488044 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:07:42.488051 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:07:42.488055 | orchestrator |
2026-04-09 02:07:42.488059 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-09 02:07:42.488063 | orchestrator | Thursday 09 April 2026 02:06:05 +0000 (0:00:35.372) 0:01:53.176 ********
2026-04-09 02:07:42.488067 | orchestrator | changed: [testbed-manager]
2026-04-09 02:07:42.488071 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:07:42.488075 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:07:42.488079 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:07:42.488083 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:07:42.488087 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:07:42.488091 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:07:42.488094 | orchestrator |
2026-04-09 02:07:42.488098 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-09 02:07:42.488102 | orchestrator | Thursday 09 April 2026 02:07:25 +0000 (0:01:19.831) 0:03:13.007 ********
2026-04-09 02:07:42.488107 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:07:42.488110 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:07:42.488114 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:07:42.488118 | orchestrator | ok: [testbed-manager]
2026-04-09 02:07:42.488122 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:07:42.488125 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:07:42.488129 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:07:42.488133 | orchestrator |
2026-04-09 02:07:42.488137 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-09 02:07:42.488141 | orchestrator | Thursday 09 April 2026 02:07:26 +0000 (0:00:01.858) 0:03:14.866 ********
2026-04-09 02:07:42.488145 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:07:42.488149 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:07:42.488152 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:07:42.488156 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:07:42.488160 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:07:42.488164 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:07:42.488167 | orchestrator | changed: [testbed-manager]
2026-04-09 02:07:42.488171 | orchestrator |
2026-04-09 02:07:42.488175 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-09 02:07:42.488179 | orchestrator | Thursday 09 April 2026 02:07:41 +0000 (0:00:14.151) 0:03:29.017 ********
2026-04-09 02:07:42.488205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-09 02:07:42.488223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-09 02:07:42.488234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-09 02:07:42.488239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-09 02:07:42.488244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-09 02:07:42.488248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-09 02:07:42.488252 | orchestrator |
2026-04-09 02:07:42.488256 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-09 02:07:42.488260 | orchestrator | Thursday 09 April 2026 02:07:41 +0000 (0:00:00.477) 0:03:29.494 ********
2026-04-09 02:07:42.488264 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 02:07:42.488268 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 02:07:42.488272 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:07:42.488276 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 02:07:42.488279 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:07:42.488286 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 02:07:42.488290 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:07:42.488294 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:07:42.488298 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 02:07:42.488302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 02:07:42.488306 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 02:07:42.488310 | orchestrator |
2026-04-09 02:07:42.488313 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-09 02:07:42.488317 | orchestrator | Thursday 09 April 2026 02:07:42 +0000 (0:00:00.770) 0:03:30.264 ********
2026-04-09 02:07:42.488321 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 02:07:42.488327 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 02:07:42.488332 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 02:07:42.488339 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 02:07:42.488344 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 02:07:42.488356 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 02:07:48.166713 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 02:07:48.166820 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 02:07:48.166863 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 02:07:48.166874 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 02:07:48.166914 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 02:07:48.166925 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 02:07:48.166936 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 02:07:48.166947 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 02:07:48.166957 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 02:07:48.166968 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:07:48.166980 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 02:07:48.166991 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 02:07:48.167002 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 02:07:48.167012 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 02:07:48.167023 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 02:07:48.167033 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 02:07:48.167044 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 02:07:48.167054 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:07:48.167065 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 02:07:48.167075 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 02:07:48.167086 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 02:07:48.167097 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 02:07:48.167107 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 02:07:48.167117 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 02:07:48.167126 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 02:07:48.167136 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 02:07:48.167146 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 02:07:48.167156 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 02:07:48.167166 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 02:07:48.167176 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 02:07:48.167202 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 02:07:48.167213 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:07:48.167224 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 02:07:48.167235 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 02:07:48.167246 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 02:07:48.167257 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 02:07:48.167277 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 02:07:48.167289 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:07:48.167301 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 02:07:48.167312 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 02:07:48.167323 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 02:07:48.167333 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 02:07:48.167345 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 02:07:48.167375 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 02:07:48.167384 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 02:07:48.167392 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 02:07:48.167399 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 02:07:48.167407 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 02:07:48.167414 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 02:07:48.167422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 02:07:48.167429 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 02:07:48.167437 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 02:07:48.167444 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 02:07:48.167451 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 02:07:48.167459 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 02:07:48.167467 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 02:07:48.167474 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 02:07:48.167481 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 02:07:48.167489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 02:07:48.167496 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 02:07:48.167504 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 02:07:48.167511 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 02:07:48.167518 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 02:07:48.167526 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 02:07:48.167533 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 02:07:48.167540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 02:07:48.167548 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 02:07:48.167556 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 02:07:48.167569 | orchestrator |
2026-04-09 02:07:48.167578 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-09 02:07:48.167585 | orchestrator | Thursday 09 April 2026 02:07:47 +0000 (0:00:04.645) 0:03:34.910 ********
2026-04-09 02:07:48.167593 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 02:07:48.167600 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 02:07:48.167608 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 02:07:48.167616 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 02:07:48.167628 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 02:07:48.167634 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 02:07:48.167641 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 02:07:48.167647 | orchestrator |
2026-04-09 02:07:48.167653 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-09 02:07:48.167659 | orchestrator | Thursday 09 April 2026 02:07:47 +0000 (0:00:00.617) 0:03:35.528 ********
2026-04-09 02:07:48.167666 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:07:48.167672 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:07:48.167682 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:07:48.167692 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:07:48.167702 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:07:48.167715 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:07:48.167730 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:07:48.167740 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:07:48.167750 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:07:48.167760 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:07:48.167778 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:08:01.781693 | orchestrator |
2026-04-09 02:08:01.781760 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-09 02:08:01.781769 | orchestrator | Thursday 09 April 2026 02:07:48 +0000 (0:00:00.501) 0:03:36.029 ********
2026-04-09 02:08:01.781775 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:08:01.781781 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:08:01.781787 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:08:01.781793 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:08:01.781799 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:08:01.781805 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:08:01.781810 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:08:01.781816 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:08:01.781822 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:08:01.781828 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:08:01.781833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 02:08:01.781839 | orchestrator |
2026-04-09 02:08:01.781845 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-09 02:08:01.781866 | orchestrator | Thursday 09 April 2026 02:07:48 +0000 (0:00:00.640) 0:03:36.670 ********
2026-04-09 02:08:01.781872 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 02:08:01.781878 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:08:01.781883 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 02:08:01.781936 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 02:08:01.781942 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:08:01.781948
| orchestrator | skipping: [testbed-node-1] 2026-04-09 02:08:01.781953 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-09 02:08:01.781959 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:08:01.781965 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-09 02:08:01.781971 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-09 02:08:01.781978 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-09 02:08:01.781985 | orchestrator | 2026-04-09 02:08:01.781991 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-09 02:08:01.781998 | orchestrator | Thursday 09 April 2026 02:07:49 +0000 (0:00:00.598) 0:03:37.268 ******** 2026-04-09 02:08:01.782005 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:08:01.782056 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:08:01.782063 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:08:01.782069 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:08:01.782076 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:08:01.782082 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:08:01.782088 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:08:01.782095 | orchestrator | 2026-04-09 02:08:01.782101 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-04-09 02:08:01.782108 | orchestrator | Thursday 09 April 2026 02:07:49 +0000 (0:00:00.341) 0:03:37.609 ******** 2026-04-09 02:08:01.782113 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:08:01.782120 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:08:01.782126 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:08:01.782131 | orchestrator | ok: [testbed-node-0] 
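The sysctl tasks above loop over items of the form `{'name': ..., 'value': ...}` per host group. As a minimal sketch of what such a list amounts to on disk (the function and file layout here are hypothetical, not taken from the osism.commons.sysctl role), the items can be rendered into `/etc/sysctl.d`-style `key = value` lines:

```python
# Hypothetical helper: render the {'name': ..., 'value': ...} items seen in
# the sysctl task output above into sysctl.d-style "key = value" lines.
# The item shapes come from the log; the function itself is made up here.

def render_sysctl_conf(items):
    """Return the contents of a sysctl.d drop-in file for the given items."""
    return "\n".join(f"{item['name']} = {item['value']}" for item in items) + "\n"

# Example items as they appear in the log output
generic_items = [{"name": "vm.swappiness", "value": 1}]
compute_items = [{"name": "net.netfilter.nf_conntrack_max", "value": 1048576}]

print(render_sysctl_conf(generic_items + compute_items))
```

Applying the rendered file would then be a matter of `sysctl --system` on the target host; the role's actual mechanism is not shown in the log.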
2026-04-09 02:08:01.782137 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:08:01.782142 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:08:01.782148 | orchestrator | ok: [testbed-manager]
2026-04-09 02:08:01.782154 | orchestrator |
2026-04-09 02:08:01.782160 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-09 02:08:01.782166 | orchestrator | Thursday 09 April 2026 02:07:55 +0000 (0:00:05.794) 0:03:43.404 ********
2026-04-09 02:08:01.782173 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-09 02:08:01.782179 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:08:01.782185 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-09 02:08:01.782191 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-09 02:08:01.782197 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:08:01.782204 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-09 02:08:01.782210 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:08:01.782216 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:08:01.782221 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-09 02:08:01.782225 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-09 02:08:01.782239 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:08:01.782243 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:08:01.782247 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-09 02:08:01.782251 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:08:01.782255 | orchestrator |
2026-04-09 02:08:01.782268 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-09 02:08:01.782275 | orchestrator | Thursday 09 April 2026 02:07:55 +0000 (0:00:00.385) 0:03:43.790 ********
2026-04-09 02:08:01.782282 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-09 02:08:01.782287 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-09 02:08:01.782291 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-09 02:08:01.782308 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-09 02:08:01.782313 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-09 02:08:01.782317 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-09 02:08:01.782322 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-09 02:08:01.782327 | orchestrator |
2026-04-09 02:08:01.782332 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-09 02:08:01.782336 | orchestrator | Thursday 09 April 2026 02:07:56 +0000 (0:00:01.024) 0:03:44.814 ********
2026-04-09 02:08:01.782342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:08:01.782349 | orchestrator |
2026-04-09 02:08:01.782353 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-09 02:08:01.782358 | orchestrator | Thursday 09 April 2026 02:07:57 +0000 (0:00:00.607) 0:03:45.422 ********
2026-04-09 02:08:01.782362 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:08:01.782367 | orchestrator | ok: [testbed-manager]
2026-04-09 02:08:01.782371 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:08:01.782376 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:08:01.782380 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:08:01.782385 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:08:01.782392 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:08:01.782399 | orchestrator |
2026-04-09 02:08:01.782405 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-09 02:08:01.782412 | orchestrator | Thursday 09 April 2026 02:07:58 +0000 (0:00:01.251) 0:03:46.674 ********
2026-04-09 02:08:01.782417 | orchestrator | ok: [testbed-manager]
2026-04-09 02:08:01.782421 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:08:01.782426 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:08:01.782430 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:08:01.782435 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:08:01.782440 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:08:01.782445 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:08:01.782449 | orchestrator |
2026-04-09 02:08:01.782454 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-09 02:08:01.782459 | orchestrator | Thursday 09 April 2026 02:07:59 +0000 (0:00:00.639) 0:03:47.314 ********
2026-04-09 02:08:01.782463 | orchestrator | changed: [testbed-manager]
2026-04-09 02:08:01.782468 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:08:01.782473 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:08:01.782478 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:08:01.782482 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:08:01.782487 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:08:01.782491 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:08:01.782496 | orchestrator |
2026-04-09 02:08:01.782501 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-09 02:08:01.782506 | orchestrator | Thursday 09 April 2026 02:08:00 +0000 (0:00:00.646) 0:03:47.960 ********
2026-04-09 02:08:01.782510 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:08:01.782515 | orchestrator | ok: [testbed-manager]
2026-04-09 02:08:01.782520 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:08:01.782524 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:08:01.782529 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:08:01.782533 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:08:01.782538 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:08:01.782542 | orchestrator |
2026-04-09 02:08:01.782547 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-09 02:08:01.782555 | orchestrator | Thursday 09 April 2026 02:08:00 +0000 (0:00:00.614) 0:03:48.574 ********
2026-04-09 02:08:01.782564 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775698867.02667, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:01.782571 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775698888.9153454, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:01.782576 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775698897.0555637, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:01.782593 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775698884.8650482, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881550 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775698891.1215072, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881655 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775698895.6744459, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881669 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775698897.820432, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881710 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881746 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881765 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881783 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881829 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881842 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881852 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 02:08:06.881870 | orchestrator |
2026-04-09 02:08:06.881883 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-09 02:08:06.882085 | orchestrator | Thursday 09 April 2026 02:08:01 +0000 (0:00:01.074) 0:03:49.649 ********
2026-04-09 02:08:06.882102 | orchestrator | changed: [testbed-manager]
2026-04-09 02:08:06.882116 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:08:06.882129 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:08:06.882141 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:08:06.882154 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:08:06.882164 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:08:06.882174 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:08:06.882184 | orchestrator |
2026-04-09 02:08:06.882194 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-09 02:08:06.882203 | orchestrator | Thursday 09 April 2026 02:08:02 +0000 (0:00:01.046) 0:03:50.695 ********
2026-04-09 02:08:06.882213 | orchestrator | changed: [testbed-manager]
2026-04-09 02:08:06.882223 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:08:06.882232 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:08:06.882242 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:08:06.882251 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:08:06.882261 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:08:06.882271 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:08:06.882280 | orchestrator |
2026-04-09 02:08:06.882296 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-09 02:08:06.882306 | orchestrator | Thursday 09 April 2026 02:08:03 +0000 (0:00:01.172) 0:03:51.868 ********
2026-04-09 02:08:06.882316 | orchestrator | changed: [testbed-manager]
2026-04-09 02:08:06.882326 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:08:06.882335 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:08:06.882345 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:08:06.882355 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:08:06.882364 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:08:06.882374 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:08:06.882383 | orchestrator |
2026-04-09 02:08:06.882393 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-09 02:08:06.882403 | orchestrator | Thursday 09 April 2026 02:08:05 +0000 (0:00:01.169) 0:03:53.037 ********
2026-04-09 02:08:06.882413 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:08:06.882423 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:08:06.882432 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:08:06.882442 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:08:06.882451 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:08:06.882461 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:08:06.882470 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:08:06.882480 | orchestrator |
2026-04-09 02:08:06.882490 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-09 02:08:06.882499 | orchestrator | Thursday 09 April 2026 02:08:05 +0000 (0:00:00.370) 0:03:53.408 ********
2026-04-09 02:08:06.882509 | orchestrator | ok: [testbed-manager]
2026-04-09 02:08:06.882520 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:08:06.882530 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:08:06.882539 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:08:06.882549 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:08:06.882559 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:08:06.882568 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:08:06.882578 | orchestrator |
2026-04-09 02:08:06.882588 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-09 02:08:06.882598 | orchestrator | Thursday 09 April 2026 02:08:06 +0000 (0:00:00.887) 0:03:54.296 ********
2026-04-09 02:08:06.882610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:08:06.882630 | orchestrator |
2026-04-09 02:08:06.882640 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-09 02:08:06.882660 | orchestrator | Thursday 09 April 2026 02:08:06 +0000 (0:00:00.454) 0:03:54.751 ********
2026-04-09 02:09:26.430658 | orchestrator | ok: [testbed-manager]
2026-04-09 02:09:26.430778 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:09:26.430806 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:09:26.430827 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:09:26.430846 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:09:26.430864 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:09:26.430885 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:09:26.430904 | orchestrator |
2026-04-09 02:09:26.430927 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
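The "Remove pam_motd.so rule" task above reports `changed` for `/etc/pam.d/sshd` and `/etc/pam.d/login` on every host. Its net effect is that lines invoking `pam_motd.so` disappear from those PAM configs; the sketch below mirrors that effect on a text in memory (the function and the sample PAM lines are illustrative, not taken from the role's implementation, which the log does not show):

```python
# Sketch of what removing the pam_motd.so rule amounts to: drop any line that
# invokes pam_motd.so from a pam.d configuration text. This mirrors the
# task's effect only; the role's actual mechanism is not visible in the log.

def strip_pam_motd(text: str) -> str:
    """Return the PAM config text with all pam_motd.so lines removed."""
    kept = [line for line in text.splitlines() if "pam_motd.so" not in line]
    return "\n".join(kept) + "\n"

# Illustrative excerpt of a Debian-style /etc/pam.d/sshd
sshd_pam = (
    "session    required     pam_limits.so\n"
    "session    optional     pam_motd.so motd=/run/motd.dynamic\n"
    "session    optional     pam_motd.so noupdate\n"
)
print(strip_pam_motd(sshd_pam))
```

With the PAM rules gone, the static `/etc/motd` copied by the following tasks is no longer printed by PAM at login, which is consistent with the role then managing motd output via the SSH configuration tasks seen above.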
2026-04-09 02:09:26.431037 | orchestrator | Thursday 09 April 2026 02:08:15 +0000 (0:00:08.411) 0:04:03.162 ********
2026-04-09 02:09:26.431049 | orchestrator | ok: [testbed-manager]
2026-04-09 02:09:26.431061 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:09:26.431072 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:09:26.431083 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:09:26.431094 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:09:26.431105 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:09:26.431116 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:09:26.431127 | orchestrator |
2026-04-09 02:09:26.431139 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-09 02:09:26.431150 | orchestrator | Thursday 09 April 2026 02:08:16 +0000 (0:00:01.264) 0:04:04.426 ********
2026-04-09 02:09:26.431161 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:09:26.431175 | orchestrator | ok: [testbed-manager]
2026-04-09 02:09:26.431194 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:09:26.431212 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:09:26.431231 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:09:26.431249 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:09:26.431265 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:09:26.431282 | orchestrator |
2026-04-09 02:09:26.431301 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-09 02:09:26.431321 | orchestrator | Thursday 09 April 2026 02:08:17 +0000 (0:00:01.287) 0:04:05.714 ********
2026-04-09 02:09:26.431340 | orchestrator | ok: [testbed-manager]
2026-04-09 02:09:26.431358 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:09:26.431376 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:09:26.431394 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:09:26.431411 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:09:26.431429 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:09:26.431447 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:09:26.431466 | orchestrator |
2026-04-09 02:09:26.431486 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-09 02:09:26.431507 | orchestrator | Thursday 09 April 2026 02:08:18 +0000 (0:00:00.337) 0:04:06.051 ********
2026-04-09 02:09:26.431525 | orchestrator | ok: [testbed-manager]
2026-04-09 02:09:26.431546 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:09:26.431565 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:09:26.431583 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:09:26.431600 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:09:26.431619 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:09:26.431638 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:09:26.431656 | orchestrator |
2026-04-09 02:09:26.431676 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-09 02:09:26.431695 | orchestrator | Thursday 09 April 2026 02:08:18 +0000 (0:00:00.328) 0:04:06.379 ********
2026-04-09 02:09:26.431713 | orchestrator | ok: [testbed-manager]
2026-04-09 02:09:26.431734 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:09:26.431745 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:09:26.431790 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:09:26.431802 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:09:26.431813 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:09:26.431824 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:09:26.431835 | orchestrator |
2026-04-09 02:09:26.431846 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-09 02:09:26.431858 | orchestrator | Thursday 09 April 2026 02:08:18 +0000 (0:00:00.352) 0:04:06.732 ********
2026-04-09 02:09:26.431869 | orchestrator | ok: [testbed-manager]
2026-04-09 02:09:26.431880 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:09:26.431891 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:09:26.431902 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:09:26.431913 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:09:26.431923 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:09:26.432033 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:09:26.432048 | orchestrator |
2026-04-09 02:09:26.432059 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-09 02:09:26.432070 | orchestrator | Thursday 09 April 2026 02:08:24 +0000 (0:00:05.840) 0:04:12.573 ********
2026-04-09 02:09:26.432084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:09:26.432099 | orchestrator |
2026-04-09 02:09:26.432118 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-09 02:09:26.432135 | orchestrator | Thursday 09 April 2026 02:08:25 +0000 (0:00:00.436) 0:04:13.009 ********
2026-04-09 02:09:26.432152 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-09 02:09:26.432169 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-09 02:09:26.432186 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-09 02:09:26.432205 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-09 02:09:26.432223 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:09:26.432265 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-09 02:09:26.432284 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-09 02:09:26.432302 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:09:26.432321 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
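Each task header in this log is followed by a timing record such as `Thursday 09 April 2026 02:08:24 +0000 (0:00:05.840) 0:04:12.573 ********`, where the parenthesized field is the previous task's duration and the trailing field is the cumulative play time. When hunting for slow tasks in an upgrade run like this one, that field can be pulled out mechanically (a small post-processing sketch, not part of the job itself):

```python
# Parse the "(H:MM:SS.fff)" per-task duration field from Ansible profile
# timing lines, as seen throughout the output above. Post-processing sketch
# for log analysis; not something the job runs.

import re

TIMING = re.compile(r"\((\d+):(\d{2}):(\d{2}\.\d+)\)")

def task_duration_seconds(line: str) -> float:
    """Extract the parenthesized task duration from a timing line, in seconds."""
    hours, minutes, seconds = TIMING.search(line).groups()
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# Timing line copied from the log
line = "Thursday 09 April 2026 02:08:24 +0000 (0:00:05.840) 0:04:12.573 ********"
print(task_duration_seconds(line))  # 5.84
```

Scanning all timing lines this way quickly surfaces the expensive steps in this run, such as the 35-second "Cleanup installed packages" task further down.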
2026-04-09 02:09:26.432339 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-09 02:09:26.432357 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:09:26.432376 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:09:26.432394 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-09 02:09:26.432412 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-09 02:09:26.432431 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-09 02:09:26.432449 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-09 02:09:26.432499 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:09:26.432517 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:09:26.432532 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-09 02:09:26.432547 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-09 02:09:26.432564 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:09:26.432581 | orchestrator |
2026-04-09 02:09:26.432598 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-09 02:09:26.432614 | orchestrator | Thursday 09 April 2026 02:08:25 +0000 (0:00:00.419) 0:04:13.429 ********
2026-04-09 02:09:26.432631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:09:26.432649 | orchestrator |
2026-04-09 02:09:26.432659 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-09 02:09:26.432686 | orchestrator | Thursday 09 April 2026 02:08:26 +0000 (0:00:00.454) 0:04:13.883 ********
2026-04-09 02:09:26.432702 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-09 02:09:26.432719 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-09 02:09:26.432735 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:09:26.432751 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-09 02:09:26.432765 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:09:26.432775 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:09:26.432785 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-09 02:09:26.432795 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-09 02:09:26.432804 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:09:26.432814 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:09:26.432824 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-09 02:09:26.432834 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:09:26.432844 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-09 02:09:26.432854 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:09:26.432863 | orchestrator |
2026-04-09 02:09:26.432874 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-09 02:09:26.432884 | orchestrator | Thursday 09 April 2026 02:08:26 +0000 (0:00:00.359) 0:04:14.243 ********
2026-04-09 02:09:26.432894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:09:26.432905 | orchestrator |
2026-04-09 02:09:26.432914 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-09 02:09:26.432924 | orchestrator | Thursday 09 April 2026 02:08:26 +0000 (0:00:00.504) 0:04:14.748 ********
2026-04-09 02:09:26.432964 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:09:26.432975 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:09:26.432985 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:09:26.432995 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:09:26.433013 | orchestrator | changed: [testbed-manager]
2026-04-09 02:09:26.433023 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:09:26.433033 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:09:26.433043 | orchestrator |
2026-04-09 02:09:26.433053 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-09 02:09:26.433062 | orchestrator | Thursday 09 April 2026 02:09:02 +0000 (0:00:35.236) 0:04:49.984 ********
2026-04-09 02:09:26.433072 | orchestrator | changed: [testbed-manager]
2026-04-09 02:09:26.433082 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:09:26.433091 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:09:26.433101 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:09:26.433110 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:09:26.433120 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:09:26.433130 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:09:26.433139 | orchestrator |
2026-04-09 02:09:26.433149 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-09 02:09:26.433159 | orchestrator | Thursday 09 April 2026 02:09:10 +0000 (0:00:08.522) 0:04:58.507 ********
2026-04-09 02:09:26.433169 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:09:26.433179 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:09:26.433189 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:09:26.433198 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:09:26.433208 | orchestrator | changed: [testbed-manager]
2026-04-09 02:09:26.433217 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:09:26.433227 | orchestrator | changed:
[testbed-node-3] 2026-04-09 02:09:26.433236 | orchestrator | 2026-04-09 02:09:26.433252 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-09 02:09:26.433279 | orchestrator | Thursday 09 April 2026 02:09:18 +0000 (0:00:07.888) 0:05:06.395 ******** 2026-04-09 02:09:26.433295 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:09:26.433311 | orchestrator | ok: [testbed-manager] 2026-04-09 02:09:26.433329 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:09:26.433346 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:09:26.433362 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:09:26.433375 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:09:26.433385 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:09:26.433395 | orchestrator | 2026-04-09 02:09:26.433405 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-09 02:09:26.433416 | orchestrator | Thursday 09 April 2026 02:09:20 +0000 (0:00:01.797) 0:05:08.193 ******** 2026-04-09 02:09:26.433426 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:09:26.433436 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:09:26.433452 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:09:26.433466 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:09:26.433479 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:09:26.433493 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:09:26.433510 | orchestrator | changed: [testbed-manager] 2026-04-09 02:09:26.433537 | orchestrator | 2026-04-09 02:09:26.433569 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-09 02:09:38.417196 | orchestrator | Thursday 09 April 2026 02:09:26 +0000 (0:00:06.101) 0:05:14.295 ******** 2026-04-09 02:09:38.417327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:09:38.417354 | orchestrator | 2026-04-09 02:09:38.417374 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-09 02:09:38.417391 | orchestrator | Thursday 09 April 2026 02:09:27 +0000 (0:00:00.622) 0:05:14.917 ******** 2026-04-09 02:09:38.417407 | orchestrator | changed: [testbed-manager] 2026-04-09 02:09:38.417423 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:09:38.417439 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:09:38.417455 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:09:38.417472 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:09:38.417488 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:09:38.417505 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:09:38.417522 | orchestrator | 2026-04-09 02:09:38.417537 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-09 02:09:38.417553 | orchestrator | Thursday 09 April 2026 02:09:27 +0000 (0:00:00.759) 0:05:15.676 ******** 2026-04-09 02:09:38.417568 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:09:38.417586 | orchestrator | ok: [testbed-manager] 2026-04-09 02:09:38.417602 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:09:38.417620 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:09:38.417637 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:09:38.417655 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:09:38.417672 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:09:38.417690 | orchestrator | 2026-04-09 02:09:38.417708 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-09 02:09:38.417726 | orchestrator | Thursday 09 April 2026 02:09:29 +0000 (0:00:01.760) 0:05:17.437 ******** 2026-04-09 02:09:38.417746 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:09:38.417767 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 02:09:38.417787 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:09:38.417806 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:09:38.417823 | orchestrator | changed: [testbed-manager] 2026-04-09 02:09:38.417841 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:09:38.417859 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:09:38.417878 | orchestrator | 2026-04-09 02:09:38.417896 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-09 02:09:38.417913 | orchestrator | Thursday 09 April 2026 02:09:30 +0000 (0:00:00.811) 0:05:18.249 ******** 2026-04-09 02:09:38.417993 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:09:38.418013 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:09:38.418107 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:09:38.418127 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:09:38.418145 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:09:38.418164 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:09:38.418182 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:09:38.418214 | orchestrator | 2026-04-09 02:09:38.418233 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-09 02:09:38.418252 | orchestrator | Thursday 09 April 2026 02:09:30 +0000 (0:00:00.307) 0:05:18.556 ******** 2026-04-09 02:09:38.418270 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:09:38.418287 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:09:38.418305 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:09:38.418339 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:09:38.418357 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:09:38.418374 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:09:38.418390 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:09:38.418406 | 
orchestrator | 2026-04-09 02:09:38.418421 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-09 02:09:38.418436 | orchestrator | Thursday 09 April 2026 02:09:31 +0000 (0:00:00.442) 0:05:18.999 ******** 2026-04-09 02:09:38.418451 | orchestrator | ok: [testbed-manager] 2026-04-09 02:09:38.418467 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:09:38.418483 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:09:38.418497 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:09:38.418513 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:09:38.418528 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:09:38.418543 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:09:38.418558 | orchestrator | 2026-04-09 02:09:38.418573 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-09 02:09:38.418590 | orchestrator | Thursday 09 April 2026 02:09:31 +0000 (0:00:00.336) 0:05:19.336 ******** 2026-04-09 02:09:38.418606 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:09:38.418622 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:09:38.418637 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:09:38.418653 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:09:38.418669 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:09:38.418687 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:09:38.418704 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:09:38.418721 | orchestrator | 2026-04-09 02:09:38.418770 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-09 02:09:38.418789 | orchestrator | Thursday 09 April 2026 02:09:31 +0000 (0:00:00.319) 0:05:19.655 ******** 2026-04-09 02:09:38.418803 | orchestrator | ok: [testbed-manager] 2026-04-09 02:09:38.418818 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:09:38.418849 | orchestrator | ok: [testbed-node-4] 2026-04-09 
02:09:38.418866 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:09:38.418882 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:09:38.418897 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:09:38.418912 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:09:38.418928 | orchestrator | 2026-04-09 02:09:38.418971 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-09 02:09:38.418985 | orchestrator | Thursday 09 April 2026 02:09:32 +0000 (0:00:00.355) 0:05:20.010 ******** 2026-04-09 02:09:38.419002 | orchestrator | ok: [testbed-manager] =>  2026-04-09 02:09:38.419019 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 02:09:38.419035 | orchestrator | ok: [testbed-node-3] =>  2026-04-09 02:09:38.419054 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 02:09:38.419071 | orchestrator | ok: [testbed-node-4] =>  2026-04-09 02:09:38.419088 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 02:09:38.419105 | orchestrator | ok: [testbed-node-5] =>  2026-04-09 02:09:38.419118 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 02:09:38.419156 | orchestrator | ok: [testbed-node-0] =>  2026-04-09 02:09:38.419183 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 02:09:38.419193 | orchestrator | ok: [testbed-node-1] =>  2026-04-09 02:09:38.419203 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 02:09:38.419213 | orchestrator | ok: [testbed-node-2] =>  2026-04-09 02:09:38.419222 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 02:09:38.419232 | orchestrator | 2026-04-09 02:09:38.419242 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-09 02:09:38.419252 | orchestrator | Thursday 09 April 2026 02:09:32 +0000 (0:00:00.348) 0:05:20.358 ******** 2026-04-09 02:09:38.419262 | orchestrator | ok: [testbed-manager] =>  2026-04-09 02:09:38.419271 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 02:09:38.419279 | orchestrator | ok: 
[testbed-node-3] =>  2026-04-09 02:09:38.419286 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 02:09:38.419294 | orchestrator | ok: [testbed-node-4] =>  2026-04-09 02:09:38.419302 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 02:09:38.419310 | orchestrator | ok: [testbed-node-5] =>  2026-04-09 02:09:38.419318 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 02:09:38.419326 | orchestrator | ok: [testbed-node-0] =>  2026-04-09 02:09:38.419334 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 02:09:38.419342 | orchestrator | ok: [testbed-node-1] =>  2026-04-09 02:09:38.419350 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 02:09:38.419358 | orchestrator | ok: [testbed-node-2] =>  2026-04-09 02:09:38.419366 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 02:09:38.419374 | orchestrator | 2026-04-09 02:09:38.419382 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-09 02:09:38.419390 | orchestrator | Thursday 09 April 2026 02:09:32 +0000 (0:00:00.354) 0:05:20.713 ******** 2026-04-09 02:09:38.419398 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:09:38.419406 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:09:38.419413 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:09:38.419421 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:09:38.419429 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:09:38.419437 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:09:38.419445 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:09:38.419452 | orchestrator | 2026-04-09 02:09:38.419461 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-09 02:09:38.419468 | orchestrator | Thursday 09 April 2026 02:09:33 +0000 (0:00:00.322) 0:05:21.036 ******** 2026-04-09 02:09:38.419476 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:09:38.419484 | orchestrator | 
skipping: [testbed-node-3] 2026-04-09 02:09:38.419492 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:09:38.419500 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:09:38.419508 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:09:38.419516 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:09:38.419523 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:09:38.419531 | orchestrator | 2026-04-09 02:09:38.419539 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-09 02:09:38.419547 | orchestrator | Thursday 09 April 2026 02:09:33 +0000 (0:00:00.323) 0:05:21.359 ******** 2026-04-09 02:09:38.419557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:09:38.419568 | orchestrator | 2026-04-09 02:09:38.419583 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-09 02:09:38.419591 | orchestrator | Thursday 09 April 2026 02:09:33 +0000 (0:00:00.493) 0:05:21.852 ******** 2026-04-09 02:09:38.419599 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:09:38.419607 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:09:38.419615 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:09:38.419623 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:09:38.419631 | orchestrator | ok: [testbed-manager] 2026-04-09 02:09:38.419643 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:09:38.419651 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:09:38.419659 | orchestrator | 2026-04-09 02:09:38.419667 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-09 02:09:38.419675 | orchestrator | Thursday 09 April 2026 02:09:35 +0000 (0:00:01.125) 0:05:22.977 ******** 2026-04-09 
02:09:38.419683 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:09:38.419691 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:09:38.419699 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:09:38.419707 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:09:38.419714 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:09:38.419722 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:09:38.419730 | orchestrator | ok: [testbed-manager] 2026-04-09 02:09:38.419738 | orchestrator | 2026-04-09 02:09:38.419746 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-09 02:09:38.419755 | orchestrator | Thursday 09 April 2026 02:09:37 +0000 (0:00:02.887) 0:05:25.865 ******** 2026-04-09 02:09:38.419763 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-09 02:09:38.419772 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-09 02:09:38.419780 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-09 02:09:38.419788 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-09 02:09:38.419796 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-09 02:09:38.419804 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-09 02:09:38.419812 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:09:38.419820 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:09:38.419827 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-09 02:09:38.419835 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-09 02:09:38.419843 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-09 02:09:38.419851 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-09 02:09:38.419859 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-09 02:09:38.419866 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-04-09 02:09:38.419874 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:09:38.419882 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-09 02:09:38.419895 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-09 02:10:35.931031 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-09 02:10:35.931143 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:10:35.931160 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-04-09 02:10:35.931170 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-09 02:10:35.931180 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-09 02:10:35.931190 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:10:35.931200 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:10:35.931210 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-09 02:10:35.931221 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-09 02:10:35.931232 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-09 02:10:35.931243 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:10:35.931254 | orchestrator | 2026-04-09 02:10:35.931266 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-09 02:10:35.931279 | orchestrator | Thursday 09 April 2026 02:09:38 +0000 (0:00:00.655) 0:05:26.520 ******** 2026-04-09 02:10:35.931290 | orchestrator | ok: [testbed-manager] 2026-04-09 02:10:35.931301 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.931312 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.931323 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:10:35.931335 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.931346 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.931380 | orchestrator | changed: [testbed-node-3] 
2026-04-09 02:10:35.931390 | orchestrator | 2026-04-09 02:10:35.931399 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-09 02:10:35.931409 | orchestrator | Thursday 09 April 2026 02:09:44 +0000 (0:00:05.576) 0:05:32.097 ******** 2026-04-09 02:10:35.931418 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.931427 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:10:35.931436 | orchestrator | ok: [testbed-manager] 2026-04-09 02:10:35.931446 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:10:35.931455 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.931465 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.931474 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.931484 | orchestrator | 2026-04-09 02:10:35.931493 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-09 02:10:35.931503 | orchestrator | Thursday 09 April 2026 02:09:45 +0000 (0:00:01.091) 0:05:33.189 ******** 2026-04-09 02:10:35.931512 | orchestrator | ok: [testbed-manager] 2026-04-09 02:10:35.931522 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.931532 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.931543 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:10:35.931552 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.931563 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.931574 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:10:35.931584 | orchestrator | 2026-04-09 02:10:35.931594 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-09 02:10:35.931601 | orchestrator | Thursday 09 April 2026 02:09:53 +0000 (0:00:08.625) 0:05:41.814 ******** 2026-04-09 02:10:35.931608 | orchestrator | changed: [testbed-manager] 2026-04-09 02:10:35.931615 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:10:35.931622 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.931629 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.931636 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.931643 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:10:35.931650 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.931657 | orchestrator | 2026-04-09 02:10:35.931664 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-09 02:10:35.931672 | orchestrator | Thursday 09 April 2026 02:09:57 +0000 (0:00:03.352) 0:05:45.166 ******** 2026-04-09 02:10:35.931678 | orchestrator | ok: [testbed-manager] 2026-04-09 02:10:35.931685 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:10:35.931692 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.931699 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:10:35.931706 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.931712 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.931719 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.931726 | orchestrator | 2026-04-09 02:10:35.931733 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-09 02:10:35.931740 | orchestrator | Thursday 09 April 2026 02:09:58 +0000 (0:00:01.353) 0:05:46.520 ******** 2026-04-09 02:10:35.931747 | orchestrator | ok: [testbed-manager] 2026-04-09 02:10:35.931754 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:10:35.931761 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.931768 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:10:35.931775 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.931781 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.931787 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.931793 | orchestrator | 2026-04-09 02:10:35.931799 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-04-09 02:10:35.931805 | orchestrator | Thursday 09 April 2026 02:10:00 +0000 (0:00:01.525) 0:05:48.046 ******** 2026-04-09 02:10:35.931811 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:10:35.931817 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:10:35.931822 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:10:35.931828 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:10:35.931842 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:10:35.931847 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:10:35.931853 | orchestrator | changed: [testbed-manager] 2026-04-09 02:10:35.931859 | orchestrator | 2026-04-09 02:10:35.931865 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-09 02:10:35.931871 | orchestrator | Thursday 09 April 2026 02:10:00 +0000 (0:00:00.698) 0:05:48.744 ******** 2026-04-09 02:10:35.931876 | orchestrator | ok: [testbed-manager] 2026-04-09 02:10:35.931882 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.931888 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.931894 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:10:35.931899 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:10:35.931905 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.931911 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.931917 | orchestrator | 2026-04-09 02:10:35.931923 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-09 02:10:35.931945 | orchestrator | Thursday 09 April 2026 02:10:08 +0000 (0:00:07.998) 0:05:56.742 ******** 2026-04-09 02:10:35.931951 | orchestrator | changed: [testbed-manager] 2026-04-09 02:10:35.931957 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:10:35.931963 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.932125 | orchestrator | changed: [testbed-node-5] 
2026-04-09 02:10:35.932132 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.932137 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.932143 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.932149 | orchestrator | 2026-04-09 02:10:35.932156 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-09 02:10:35.932162 | orchestrator | Thursday 09 April 2026 02:10:09 +0000 (0:00:00.874) 0:05:57.617 ******** 2026-04-09 02:10:35.932167 | orchestrator | ok: [testbed-manager] 2026-04-09 02:10:35.932173 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.932179 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:10:35.932185 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:10:35.932190 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.932196 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.932202 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.932207 | orchestrator | 2026-04-09 02:10:35.932213 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-09 02:10:35.932219 | orchestrator | Thursday 09 April 2026 02:10:18 +0000 (0:00:08.446) 0:06:06.063 ******** 2026-04-09 02:10:35.932224 | orchestrator | ok: [testbed-manager] 2026-04-09 02:10:35.932230 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:10:35.932236 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:10:35.932241 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:10:35.932247 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:10:35.932253 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:10:35.932258 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:10:35.932264 | orchestrator | 2026-04-09 02:10:35.932270 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-09 02:10:35.932276 | orchestrator | Thursday 09 April 2026 02:10:29 
+0000 (0:00:10.849) 0:06:16.913 ******** 2026-04-09 02:10:35.932281 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-09 02:10:35.932287 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-09 02:10:35.932293 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-09 02:10:35.932299 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-09 02:10:35.932304 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-09 02:10:35.932310 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-09 02:10:35.932316 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-09 02:10:35.932322 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-09 02:10:35.932327 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-09 02:10:35.932341 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-09 02:10:35.932346 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-09 02:10:35.932387 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-09 02:10:35.932393 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-09 02:10:35.932399 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-09 02:10:35.932405 | orchestrator | 2026-04-09 02:10:35.932411 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-04-09 02:10:35.932417 | orchestrator | Thursday 09 April 2026 02:10:30 +0000 (0:00:01.195) 0:06:18.108 ******** 2026-04-09 02:10:35.932425 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:10:35.932431 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:10:35.932437 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:10:35.932443 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:10:35.932449 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:10:35.932454 | orchestrator | skipping: 
[testbed-node-1]
2026-04-09 02:10:35.932460 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:10:35.932466 | orchestrator |
2026-04-09 02:10:35.932472 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-09 02:10:35.932477 | orchestrator | Thursday 09 April 2026 02:10:30 +0000 (0:00:00.601) 0:06:18.710 ********
2026-04-09 02:10:35.932483 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:35.932489 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:10:35.932495 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:10:35.932500 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:10:35.932506 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:10:35.932512 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:10:35.932518 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:10:35.932524 | orchestrator |
2026-04-09 02:10:35.932529 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-09 02:10:35.932536 | orchestrator | Thursday 09 April 2026 02:10:34 +0000 (0:00:03.993) 0:06:22.703 ********
2026-04-09 02:10:35.932542 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:10:35.932548 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:10:35.932554 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:10:35.932560 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:10:35.932565 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:10:35.932571 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:10:35.932577 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:10:35.932582 | orchestrator |
2026-04-09 02:10:35.932589 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-09 02:10:35.932595 | orchestrator | Thursday 09 April 2026 02:10:35 +0000 (0:00:00.567) 0:06:23.271 ********
2026-04-09 02:10:35.932601 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-09 02:10:35.932607 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-09 02:10:35.932613 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:10:35.932618 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-09 02:10:35.932624 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-09 02:10:35.932630 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:10:35.932636 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-09 02:10:35.932641 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-09 02:10:35.932647 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:10:35.932662 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-09 02:10:56.331902 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-09 02:10:56.332072 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:10:56.332090 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-09 02:10:56.332103 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-09 02:10:56.332115 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:10:56.332154 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-09 02:10:56.332166 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-09 02:10:56.332178 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:10:56.332189 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-09 02:10:56.332200 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-09 02:10:56.332211 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:10:56.332223 | orchestrator |
2026-04-09 02:10:56.332237 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-09 02:10:56.332249 | orchestrator | Thursday 09 April 2026 02:10:36 +0000 (0:00:00.795) 0:06:24.067 ********
2026-04-09 02:10:56.332260 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:10:56.332272 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:10:56.332283 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:10:56.332293 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:10:56.332304 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:10:56.332315 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:10:56.332326 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:10:56.332337 | orchestrator |
2026-04-09 02:10:56.332349 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-09 02:10:56.332360 | orchestrator | Thursday 09 April 2026 02:10:36 +0000 (0:00:00.555) 0:06:24.622 ********
2026-04-09 02:10:56.332371 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:10:56.332382 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:10:56.332393 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:10:56.332404 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:10:56.332415 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:10:56.332426 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:10:56.332437 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:10:56.332450 | orchestrator |
2026-04-09 02:10:56.332464 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-09 02:10:56.332477 | orchestrator | Thursday 09 April 2026 02:10:37 +0000 (0:00:00.592) 0:06:25.214 ********
2026-04-09 02:10:56.332490 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:10:56.332502 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:10:56.332515 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:10:56.332528 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:10:56.332540 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:10:56.332553 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:10:56.332566 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:10:56.332579 | orchestrator |
2026-04-09 02:10:56.332592 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-09 02:10:56.332605 | orchestrator | Thursday 09 April 2026 02:10:37 +0000 (0:00:00.611) 0:06:25.825 ********
2026-04-09 02:10:56.332618 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:56.332632 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:10:56.332645 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:10:56.332658 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:10:56.332671 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:10:56.332684 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:10:56.332697 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:10:56.332710 | orchestrator |
2026-04-09 02:10:56.332721 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-09 02:10:56.332732 | orchestrator | Thursday 09 April 2026 02:10:39 +0000 (0:00:01.955) 0:06:27.781 ********
2026-04-09 02:10:56.332744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:10:56.332757 | orchestrator |
2026-04-09 02:10:56.332769 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-09 02:10:56.332780 | orchestrator | Thursday 09 April 2026 02:10:40 +0000 (0:00:00.950) 0:06:28.731 ********
2026-04-09 02:10:56.332806 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:56.332818 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:10:56.332829 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:10:56.332840 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:10:56.332851 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:10:56.332862 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:10:56.332873 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:10:56.332884 | orchestrator |
2026-04-09 02:10:56.332895 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-09 02:10:56.332906 | orchestrator | Thursday 09 April 2026 02:10:41 +0000 (0:00:00.879) 0:06:29.610 ********
2026-04-09 02:10:56.332917 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:56.332928 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:10:56.332939 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:10:56.332950 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:10:56.332961 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:10:56.332972 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:10:56.333003 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:10:56.333014 | orchestrator |
2026-04-09 02:10:56.333025 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-09 02:10:56.333036 | orchestrator | Thursday 09 April 2026 02:10:42 +0000 (0:00:00.945) 0:06:30.556 ********
2026-04-09 02:10:56.333047 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:56.333058 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:10:56.333069 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:10:56.333080 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:10:56.333090 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:10:56.333101 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:10:56.333112 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:10:56.333123 | orchestrator |
2026-04-09 02:10:56.333134 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-09 02:10:56.333162 | orchestrator | Thursday 09 April 2026 02:10:44 +0000 (0:00:01.562) 0:06:32.118 ********
2026-04-09 02:10:56.333174 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:10:56.333185 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:10:56.333196 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:10:56.333207 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:10:56.333218 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:10:56.333229 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:10:56.333240 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:10:56.333251 | orchestrator |
2026-04-09 02:10:56.333262 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-09 02:10:56.333273 | orchestrator | Thursday 09 April 2026 02:10:45 +0000 (0:00:01.349) 0:06:33.468 ********
2026-04-09 02:10:56.333284 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:56.333295 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:10:56.333306 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:10:56.333317 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:10:56.333328 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:10:56.333339 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:10:56.333350 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:10:56.333361 | orchestrator |
2026-04-09 02:10:56.333372 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-09 02:10:56.333383 | orchestrator | Thursday 09 April 2026 02:10:46 +0000 (0:00:01.368) 0:06:34.837 ********
2026-04-09 02:10:56.333394 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:10:56.333405 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:10:56.333415 | orchestrator | changed: [testbed-manager]
2026-04-09 02:10:56.333426 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:10:56.333437 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:10:56.333448 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:10:56.333459 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:10:56.333470 | orchestrator |
2026-04-09 02:10:56.333488 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-09 02:10:56.333500 | orchestrator | Thursday 09 April 2026 02:10:48 +0000 (0:00:01.573) 0:06:36.410 ********
2026-04-09 02:10:56.333511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:10:56.333522 | orchestrator |
2026-04-09 02:10:56.333533 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-09 02:10:56.333544 | orchestrator | Thursday 09 April 2026 02:10:49 +0000 (0:00:01.172) 0:06:37.583 ********
2026-04-09 02:10:56.333555 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:10:56.333567 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:10:56.333577 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:56.333589 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:10:56.333599 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:10:56.333610 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:10:56.333621 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:10:56.333632 | orchestrator |
2026-04-09 02:10:56.333643 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-09 02:10:56.333654 | orchestrator | Thursday 09 April 2026 02:10:51 +0000 (0:00:01.364) 0:06:38.948 ********
2026-04-09 02:10:56.333665 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:56.333676 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:10:56.333687 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:10:56.333698 | orchestrator | ok: [testbed-node-5]
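Editor's note: the `Copy daemon.json configuration file` task above follows the standard Ansible pattern of templating a configuration file and notifying a restart handler on change. A minimal sketch of that pattern, purely illustrative (the actual template, variables, and handler names live in the osism.services.docker role):

```yaml
# Hypothetical sketch: deploy /etc/docker/daemon.json and restart docker on change.
- name: Copy daemon.json configuration file
  ansible.builtin.template:
    src: daemon.json.j2
    dest: /etc/docker/daemon.json
    mode: "0644"
  notify: Restart docker service
```

Hosts where the rendered file content is unchanged report `ok` and the notify never fires; only hosts reporting `changed` queue the restart handler.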
2026-04-09 02:10:56.333709 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:10:56.333735 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:10:56.333747 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:10:56.333758 | orchestrator |
2026-04-09 02:10:56.333769 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-09 02:10:56.333780 | orchestrator | Thursday 09 April 2026 02:10:52 +0000 (0:00:01.216) 0:06:40.165 ********
2026-04-09 02:10:56.333791 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:56.333802 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:10:56.333813 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:10:56.333824 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:10:56.333835 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:10:56.333846 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:10:56.333857 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:10:56.333868 | orchestrator |
2026-04-09 02:10:56.333879 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-09 02:10:56.333890 | orchestrator | Thursday 09 April 2026 02:10:53 +0000 (0:00:01.161) 0:06:41.326 ********
2026-04-09 02:10:56.333901 | orchestrator | ok: [testbed-manager]
2026-04-09 02:10:56.333912 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:10:56.333923 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:10:56.333934 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:10:56.333949 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:10:56.333968 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:10:56.334161 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:10:56.334182 | orchestrator |
2026-04-09 02:10:56.334200 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-09 02:10:56.334218 | orchestrator | Thursday 09 April 2026 02:10:54 +0000 (0:00:01.504) 0:06:42.831 ********
2026-04-09 02:10:56.334238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:10:56.334257 | orchestrator |
2026-04-09 02:10:56.334275 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 02:10:56.334294 | orchestrator | Thursday 09 April 2026 02:10:55 +0000 (0:00:00.996) 0:06:43.827 ********
2026-04-09 02:10:56.334306 | orchestrator |
2026-04-09 02:10:56.334318 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 02:10:56.334340 | orchestrator | Thursday 09 April 2026 02:10:55 +0000 (0:00:00.041) 0:06:43.869 ********
2026-04-09 02:10:56.334351 | orchestrator |
2026-04-09 02:10:56.334362 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 02:10:56.334373 | orchestrator | Thursday 09 April 2026 02:10:56 +0000 (0:00:00.055) 0:06:43.924 ********
2026-04-09 02:10:56.334384 | orchestrator |
2026-04-09 02:10:56.334395 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 02:10:56.334418 | orchestrator | Thursday 09 April 2026 02:10:56 +0000 (0:00:00.064) 0:06:43.988 ********
2026-04-09 02:11:22.544271 | orchestrator |
2026-04-09 02:11:22.544394 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 02:11:22.544420 | orchestrator | Thursday 09 April 2026 02:10:56 +0000 (0:00:00.047) 0:06:44.035 ********
2026-04-09 02:11:22.544437 | orchestrator |
2026-04-09 02:11:22.544453 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 02:11:22.544468 | orchestrator | Thursday 09 April 2026 02:10:56 +0000 (0:00:00.046) 0:06:44.082 ********
2026-04-09 02:11:22.544483 | orchestrator |
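Editor's note: the repeated `Flush handlers` tasks above correspond to explicit `meta: flush_handlers` steps in the role, which run any pending notified handlers immediately instead of deferring them to the end of the play. A minimal sketch of the mechanism (illustrative; the role's actual task files are not shown in this log):

```yaml
# Hypothetical sketch: run all queued handlers at this point in the play.
- name: Flush handlers
  ansible.builtin.meta: flush_handlers
```

This is why the `RUNNING HANDLER` entries (package cache update, rsyslog and logrotate restarts, docker restart) appear here in the middle of the run rather than at the very end of the play.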
2026-04-09 02:11:22.544499 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 02:11:22.544509 | orchestrator | Thursday 09 April 2026 02:10:56 +0000 (0:00:00.056) 0:06:44.139 ********
2026-04-09 02:11:22.544518 | orchestrator |
2026-04-09 02:11:22.544527 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-09 02:11:22.544536 | orchestrator | Thursday 09 April 2026 02:10:56 +0000 (0:00:00.047) 0:06:44.186 ********
2026-04-09 02:11:22.544545 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:22.544555 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:22.544564 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:22.544573 | orchestrator |
2026-04-09 02:11:22.544581 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-09 02:11:22.544590 | orchestrator | Thursday 09 April 2026 02:10:57 +0000 (0:00:01.156) 0:06:45.342 ********
2026-04-09 02:11:22.544599 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:11:22.544608 | orchestrator | changed: [testbed-manager]
2026-04-09 02:11:22.544617 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:11:22.544626 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:11:22.544634 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:11:22.544643 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:11:22.544651 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:11:22.544659 | orchestrator |
2026-04-09 02:11:22.544668 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-09 02:11:22.544677 | orchestrator | Thursday 09 April 2026 02:10:59 +0000 (0:00:01.586) 0:06:46.929 ********
2026-04-09 02:11:22.544685 | orchestrator | changed: [testbed-manager]
2026-04-09 02:11:22.544694 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:11:22.544703 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:11:22.544711 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:11:22.544720 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:11:22.544728 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:11:22.544736 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:11:22.544745 | orchestrator |
2026-04-09 02:11:22.544754 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-09 02:11:22.544762 | orchestrator | Thursday 09 April 2026 02:11:00 +0000 (0:00:01.223) 0:06:48.152 ********
2026-04-09 02:11:22.544771 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:11:22.544779 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:11:22.544790 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:11:22.544800 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:11:22.544810 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:11:22.544820 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:11:22.544831 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:11:22.544841 | orchestrator |
2026-04-09 02:11:22.544852 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-09 02:11:22.544862 | orchestrator | Thursday 09 April 2026 02:11:02 +0000 (0:00:02.360) 0:06:50.513 ********
2026-04-09 02:11:22.544909 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:11:22.544919 | orchestrator |
2026-04-09 02:11:22.544930 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-09 02:11:22.544941 | orchestrator | Thursday 09 April 2026 02:11:02 +0000 (0:00:00.132) 0:06:50.645 ********
2026-04-09 02:11:22.544951 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:22.544959 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:11:22.544968 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:11:22.544976 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:11:22.545039 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:11:22.545051 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:11:22.545060 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:11:22.545068 | orchestrator |
2026-04-09 02:11:22.545077 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-09 02:11:22.545087 | orchestrator | Thursday 09 April 2026 02:11:03 +0000 (0:00:01.049) 0:06:51.694 ********
2026-04-09 02:11:22.545095 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:11:22.545104 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:11:22.545113 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:11:22.545121 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:11:22.545130 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:11:22.545138 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:11:22.545147 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:11:22.545155 | orchestrator |
2026-04-09 02:11:22.545164 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-09 02:11:22.545173 | orchestrator | Thursday 09 April 2026 02:11:04 +0000 (0:00:00.580) 0:06:52.275 ********
2026-04-09 02:11:22.545183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:11:22.545194 | orchestrator |
2026-04-09 02:11:22.545203 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-09 02:11:22.545211 | orchestrator | Thursday 09 April 2026 02:11:05 +0000 (0:00:01.245) 0:06:53.521 ********
2026-04-09 02:11:22.545220 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:22.545229 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:11:22.545237 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:11:22.545246 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:11:22.545254 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:22.545263 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:22.545272 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:22.545281 | orchestrator |
2026-04-09 02:11:22.545289 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-09 02:11:22.545298 | orchestrator | Thursday 09 April 2026 02:11:06 +0000 (0:00:00.847) 0:06:54.368 ********
2026-04-09 02:11:22.545307 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-09 02:11:22.545332 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-09 02:11:22.545342 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-09 02:11:22.545351 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-09 02:11:22.545359 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-09 02:11:22.545368 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-09 02:11:22.545376 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-09 02:11:22.545385 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-09 02:11:22.545394 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-09 02:11:22.545402 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-09 02:11:22.545411 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-09 02:11:22.545419 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-09 02:11:22.545439 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-09 02:11:22.545447 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-09 02:11:22.545456 | orchestrator |
2026-04-09 02:11:22.545465 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-09 02:11:22.545474 | orchestrator | Thursday 09 April 2026 02:11:08 +0000 (0:00:02.478) 0:06:56.847 ********
2026-04-09 02:11:22.545482 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:11:22.545491 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:11:22.545499 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:11:22.545508 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:11:22.545516 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:11:22.545525 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:11:22.545533 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:11:22.545545 | orchestrator |
2026-04-09 02:11:22.545561 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-09 02:11:22.545574 | orchestrator | Thursday 09 April 2026 02:11:09 +0000 (0:00:00.818) 0:06:57.666 ********
2026-04-09 02:11:22.545590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:11:22.545607 | orchestrator |
2026-04-09 02:11:22.545622 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-09 02:11:22.545636 | orchestrator | Thursday 09 April 2026 02:11:10 +0000 (0:00:00.927) 0:06:58.593 ********
2026-04-09 02:11:22.545651 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:11:22.545661 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:22.545669 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:11:22.545678 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:11:22.545693 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:22.545707 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:22.545720 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:22.545734 | orchestrator |
2026-04-09 02:11:22.545747 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-09 02:11:22.545758 | orchestrator | Thursday 09 April 2026 02:11:11 +0000 (0:00:00.871) 0:06:59.465 ********
2026-04-09 02:11:22.545780 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:22.545794 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:11:22.545807 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:11:22.545821 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:11:22.545835 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:22.545848 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:22.545860 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:22.545872 | orchestrator |
2026-04-09 02:11:22.545885 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-09 02:11:22.545901 | orchestrator | Thursday 09 April 2026 02:11:12 +0000 (0:00:01.176) 0:07:00.642 ********
2026-04-09 02:11:22.545914 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:11:22.545927 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:11:22.545940 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:11:22.545954 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:11:22.545968 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:11:22.545983 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:11:22.546094 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:11:22.546110 | orchestrator |
2026-04-09 02:11:22.546126 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-09 02:11:22.546140 | orchestrator | Thursday 09 April 2026 02:11:13 +0000 (0:00:00.596) 0:07:01.238 ********
2026-04-09 02:11:22.546155 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:22.546170 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:11:22.546185 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:11:22.546201 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:11:22.546216 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:22.546246 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:22.546261 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:22.546277 | orchestrator |
2026-04-09 02:11:22.546292 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-09 02:11:22.546306 | orchestrator | Thursday 09 April 2026 02:11:14 +0000 (0:00:01.407) 0:07:02.646 ********
2026-04-09 02:11:22.546315 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:11:22.546324 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:11:22.546333 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:11:22.546342 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:11:22.546350 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:11:22.546359 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:11:22.546367 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:11:22.546376 | orchestrator |
2026-04-09 02:11:22.546385 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-09 02:11:22.546394 | orchestrator | Thursday 09 April 2026 02:11:15 +0000 (0:00:00.538) 0:07:03.184 ********
2026-04-09 02:11:22.546403 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:22.546411 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:11:22.546420 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:11:22.546429 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:11:22.546437 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:11:22.546446 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:11:22.546468 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:11:55.521342 | orchestrator |
2026-04-09 02:11:55.521459 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-09 02:11:55.521477 | orchestrator | Thursday 09 April 2026 02:11:22 +0000 (0:00:07.218) 0:07:10.403 ********
2026-04-09 02:11:55.521491 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:55.521503 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:11:55.521516 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:11:55.521527 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:11:55.521538 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:11:55.521549 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:11:55.521560 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:11:55.521571 | orchestrator |
2026-04-09 02:11:55.521583 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-09 02:11:55.521594 | orchestrator | Thursday 09 April 2026 02:11:24 +0000 (0:00:01.561) 0:07:11.965 ********
2026-04-09 02:11:55.521605 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:55.521616 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:11:55.521627 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:11:55.521637 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:11:55.521648 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:11:55.521659 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:11:55.521671 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:11:55.521682 | orchestrator |
2026-04-09 02:11:55.521693 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-09 02:11:55.521708 | orchestrator | Thursday 09 April 2026 02:11:25 +0000 (0:00:01.671) 0:07:13.637 ********
2026-04-09 02:11:55.521724 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:55.521735 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:11:55.521747 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:11:55.521758 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:11:55.521768 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:11:55.521780 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:11:55.521791 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:11:55.521802 | orchestrator |
2026-04-09 02:11:55.521814 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-09 02:11:55.521825 | orchestrator | Thursday 09 April 2026 02:11:27 +0000 (0:00:01.689) 0:07:15.326 ********
2026-04-09 02:11:55.521836 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:55.521847 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:11:55.521858 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:11:55.521896 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:11:55.521910 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:55.521926 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:55.521945 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:55.521964 | orchestrator |
2026-04-09 02:11:55.521983 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-09 02:11:55.522092 | orchestrator | Thursday 09 April 2026 02:11:28 +0000 (0:00:00.930) 0:07:16.257 ********
2026-04-09 02:11:55.522119 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:11:55.522140 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:11:55.522159 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:11:55.522171 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:11:55.522182 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:11:55.522193 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:11:55.522204 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:11:55.522215 | orchestrator |
2026-04-09 02:11:55.522227 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-09 02:11:55.522238 | orchestrator | Thursday 09 April 2026 02:11:29 +0000 (0:00:01.093) 0:07:17.350 ********
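Editor's note: the `osism.commons.facts` tasks above manage Ansible local facts: files placed in `/etc/ansible/facts.d` are read during fact gathering and exposed under `ansible_local`. A minimal sketch of the mechanism (the directory path and `.fact` suffix are the Ansible defaults; the file name and content here are hypothetical, not what the role deploys):

```yaml
# Hypothetical sketch: create the local facts directory and drop a static JSON fact.
- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Copy fact file
  ansible.builtin.copy:
    content: '{"role": "testbed"}'
    dest: /etc/ansible/facts.d/example.fact
    mode: "0644"
```

A `.fact` file containing JSON (or an executable that prints JSON) then shows up as `ansible_local.example` after the next fact-gathering run, which is how the `docker_containers` and `docker_images` fact files copied earlier become queryable.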
2026-04-09 02:11:55.522249 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:11:55.522260 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:11:55.522271 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:11:55.522285 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:11:55.522302 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:11:55.522314 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:11:55.522325 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:11:55.522336 | orchestrator |
2026-04-09 02:11:55.522348 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-09 02:11:55.522359 | orchestrator | Thursday 09 April 2026 02:11:30 +0000 (0:00:00.605) 0:07:17.955 ********
2026-04-09 02:11:55.522370 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:55.522400 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:11:55.522412 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:11:55.522423 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:11:55.522434 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:55.522445 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:55.522456 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:55.522467 | orchestrator |
2026-04-09 02:11:55.522479 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-09 02:11:55.522490 | orchestrator | Thursday 09 April 2026 02:11:30 +0000 (0:00:00.637) 0:07:18.593 ********
2026-04-09 02:11:55.522501 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:55.522512 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:11:55.522523 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:11:55.522535 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:11:55.522546 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:55.522557 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:55.522569 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:55.522580 | orchestrator |
2026-04-09 02:11:55.522592 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-09 02:11:55.522603 | orchestrator | Thursday 09 April 2026 02:11:31 +0000 (0:00:00.571) 0:07:19.164 ********
2026-04-09 02:11:55.522614 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:55.522625 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:11:55.522636 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:11:55.522647 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:11:55.522658 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:55.522673 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:55.522690 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:55.522701 | orchestrator |
2026-04-09 02:11:55.522713 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-09 02:11:55.522724 | orchestrator | Thursday 09 April 2026 02:11:32 +0000 (0:00:00.801) 0:07:19.965 ********
2026-04-09 02:11:55.522735 | orchestrator | ok: [testbed-manager]
2026-04-09 02:11:55.522746 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:11:55.522768 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:11:55.522780 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:11:55.522791 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:11:55.522802 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:11:55.522813 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:11:55.522823 | orchestrator |
2026-04-09 02:11:55.522854 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-09 02:11:55.522866 | orchestrator | Thursday 09 April 2026 02:11:37 +0000 (0:00:05.420) 0:07:25.385 ********
2026-04-09 02:11:55.522877 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:11:55.522888 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:11:55.522900 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:11:55.522911
| orchestrator | skipping: [testbed-node-5] 2026-04-09 02:11:55.522922 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:11:55.522933 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:11:55.522944 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:11:55.522964 | orchestrator | 2026-04-09 02:11:55.522977 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-04-09 02:11:55.522988 | orchestrator | Thursday 09 April 2026 02:11:38 +0000 (0:00:00.681) 0:07:26.066 ******** 2026-04-09 02:11:55.523140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:11:55.523165 | orchestrator | 2026-04-09 02:11:55.523177 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-04-09 02:11:55.523188 | orchestrator | Thursday 09 April 2026 02:11:39 +0000 (0:00:01.135) 0:07:27.202 ******** 2026-04-09 02:11:55.523199 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:11:55.523210 | orchestrator | ok: [testbed-manager] 2026-04-09 02:11:55.523222 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:11:55.523239 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:11:55.523255 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:11:55.523266 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:11:55.523277 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:11:55.523288 | orchestrator | 2026-04-09 02:11:55.523300 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-04-09 02:11:55.523311 | orchestrator | Thursday 09 April 2026 02:11:41 +0000 (0:00:01.824) 0:07:29.027 ******** 2026-04-09 02:11:55.523322 | orchestrator | ok: [testbed-manager] 2026-04-09 02:11:55.523333 | orchestrator | ok: [testbed-node-3] 2026-04-09 
02:11:55.523344 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:11:55.523355 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:11:55.523365 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:11:55.523376 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:11:55.523388 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:11:55.523399 | orchestrator | 2026-04-09 02:11:55.523410 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-04-09 02:11:55.523421 | orchestrator | Thursday 09 April 2026 02:11:42 +0000 (0:00:01.173) 0:07:30.200 ******** 2026-04-09 02:11:55.523432 | orchestrator | ok: [testbed-manager] 2026-04-09 02:11:55.523443 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:11:55.523454 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:11:55.523465 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:11:55.523476 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:11:55.523487 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:11:55.523499 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:11:55.523514 | orchestrator | 2026-04-09 02:11:55.523531 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-04-09 02:11:55.523542 | orchestrator | Thursday 09 April 2026 02:11:43 +0000 (0:00:00.911) 0:07:31.111 ******** 2026-04-09 02:11:55.523562 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-09 02:11:55.523575 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-09 02:11:55.523597 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-09 02:11:55.523609 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-09 02:11:55.523620 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-09 02:11:55.523910 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-09 02:11:55.523922 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-09 02:11:55.523933 | orchestrator | 2026-04-09 02:11:55.523945 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-04-09 02:11:55.523957 | orchestrator | Thursday 09 April 2026 02:11:45 +0000 (0:00:01.930) 0:07:33.041 ******** 2026-04-09 02:11:55.523969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:11:55.523986 | orchestrator | 2026-04-09 02:11:55.524092 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-04-09 02:11:55.524106 | orchestrator | Thursday 09 April 2026 02:11:46 +0000 (0:00:00.908) 0:07:33.950 ******** 2026-04-09 02:11:55.524117 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:11:55.524129 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:11:55.524140 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:11:55.524151 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:11:55.524161 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:11:55.524172 | orchestrator | changed: [testbed-manager] 2026-04-09 02:11:55.524183 | orchestrator | changed: 
[testbed-node-1] 2026-04-09 02:11:55.524194 | orchestrator | 2026-04-09 02:11:55.524218 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-09 02:12:27.663963 | orchestrator | Thursday 09 April 2026 02:11:55 +0000 (0:00:09.431) 0:07:43.382 ******** 2026-04-09 02:12:27.664146 | orchestrator | ok: [testbed-manager] 2026-04-09 02:12:27.664162 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:12:27.664171 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:12:27.664179 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:12:27.664187 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:12:27.664196 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:12:27.664204 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:12:27.664213 | orchestrator | 2026-04-09 02:12:27.664222 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-09 02:12:27.664231 | orchestrator | Thursday 09 April 2026 02:11:57 +0000 (0:00:02.087) 0:07:45.469 ******** 2026-04-09 02:12:27.664239 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:12:27.664247 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:12:27.664256 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:12:27.664264 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:12:27.664272 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:12:27.664280 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:12:27.664288 | orchestrator | 2026-04-09 02:12:27.664297 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-09 02:12:27.664305 | orchestrator | Thursday 09 April 2026 02:11:58 +0000 (0:00:01.360) 0:07:46.829 ******** 2026-04-09 02:12:27.664313 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:12:27.664322 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:12:27.664330 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:12:27.664338 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 02:12:27.664346 | orchestrator | changed: [testbed-manager] 2026-04-09 02:12:27.664376 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:12:27.664384 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:12:27.664392 | orchestrator | 2026-04-09 02:12:27.664400 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-09 02:12:27.664408 | orchestrator | 2026-04-09 02:12:27.664416 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-09 02:12:27.664425 | orchestrator | Thursday 09 April 2026 02:12:00 +0000 (0:00:01.262) 0:07:48.092 ******** 2026-04-09 02:12:27.664433 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:12:27.664441 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:12:27.664449 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:12:27.664456 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:12:27.664464 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:12:27.664474 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:12:27.664483 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:12:27.664493 | orchestrator | 2026-04-09 02:12:27.664502 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-09 02:12:27.664512 | orchestrator | 2026-04-09 02:12:27.664521 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-04-09 02:12:27.664531 | orchestrator | Thursday 09 April 2026 02:12:01 +0000 (0:00:00.806) 0:07:48.898 ******** 2026-04-09 02:12:27.664544 | orchestrator | changed: [testbed-manager] 2026-04-09 02:12:27.664557 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:12:27.664570 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:12:27.664584 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:12:27.664597 | orchestrator | changed: [testbed-node-0] 2026-04-09 
02:12:27.664610 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:12:27.664623 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:12:27.664636 | orchestrator | 2026-04-09 02:12:27.664650 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-09 02:12:27.664680 | orchestrator | Thursday 09 April 2026 02:12:02 +0000 (0:00:01.360) 0:07:50.259 ******** 2026-04-09 02:12:27.664694 | orchestrator | ok: [testbed-manager] 2026-04-09 02:12:27.664707 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:12:27.664720 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:12:27.664733 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:12:27.664746 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:12:27.664760 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:12:27.664774 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:12:27.664786 | orchestrator | 2026-04-09 02:12:27.664801 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-09 02:12:27.664816 | orchestrator | Thursday 09 April 2026 02:12:03 +0000 (0:00:01.482) 0:07:51.742 ******** 2026-04-09 02:12:27.664830 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:12:27.664843 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:12:27.664856 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:12:27.664871 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:12:27.664885 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:12:27.664897 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:12:27.664905 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:12:27.664913 | orchestrator | 2026-04-09 02:12:27.664921 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-09 02:12:27.664929 | orchestrator | Thursday 09 April 2026 02:12:04 +0000 (0:00:00.530) 0:07:52.272 ******** 2026-04-09 02:12:27.664938 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:12:27.664947 | orchestrator | 2026-04-09 02:12:27.664956 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-09 02:12:27.664963 | orchestrator | Thursday 09 April 2026 02:12:05 +0000 (0:00:01.152) 0:07:53.424 ******** 2026-04-09 02:12:27.664973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:12:27.664994 | orchestrator | 2026-04-09 02:12:27.665002 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-09 02:12:27.665069 | orchestrator | Thursday 09 April 2026 02:12:06 +0000 (0:00:00.938) 0:07:54.362 ******** 2026-04-09 02:12:27.665082 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:12:27.665090 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:12:27.665098 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:12:27.665106 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:12:27.665114 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:12:27.665122 | orchestrator | changed: [testbed-manager] 2026-04-09 02:12:27.665130 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:12:27.665138 | orchestrator | 2026-04-09 02:12:27.665165 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-09 02:12:27.665174 | orchestrator | Thursday 09 April 2026 02:12:15 +0000 (0:00:08.865) 0:08:03.227 ******** 2026-04-09 02:12:27.665182 | orchestrator | changed: [testbed-manager] 2026-04-09 02:12:27.665189 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:12:27.665197 | orchestrator | changed: [testbed-node-4] 2026-04-09 
02:12:27.665205 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:12:27.665213 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:12:27.665221 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:12:27.665228 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:12:27.665236 | orchestrator | 2026-04-09 02:12:27.665244 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-09 02:12:27.665252 | orchestrator | Thursday 09 April 2026 02:12:16 +0000 (0:00:01.108) 0:08:04.336 ******** 2026-04-09 02:12:27.665260 | orchestrator | changed: [testbed-manager] 2026-04-09 02:12:27.665268 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:12:27.665276 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:12:27.665284 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:12:27.665291 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:12:27.665299 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:12:27.665307 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:12:27.665315 | orchestrator | 2026-04-09 02:12:27.665323 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-09 02:12:27.665331 | orchestrator | Thursday 09 April 2026 02:12:17 +0000 (0:00:01.345) 0:08:05.682 ******** 2026-04-09 02:12:27.665339 | orchestrator | changed: [testbed-manager] 2026-04-09 02:12:27.665346 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:12:27.665354 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:12:27.665362 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:12:27.665370 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:12:27.665378 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:12:27.665386 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:12:27.665393 | orchestrator | 2026-04-09 02:12:27.665402 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-04-09 02:12:27.665410 | orchestrator | Thursday 09 April 2026 02:12:19 +0000 (0:00:02.051) 0:08:07.733 ********
2026-04-09 02:12:27.665417 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:12:27.665425 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:12:27.665433 | orchestrator | changed: [testbed-manager]
2026-04-09 02:12:27.665441 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:12:27.665449 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:12:27.665456 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:12:27.665464 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:12:27.665472 | orchestrator |
2026-04-09 02:12:27.665480 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-09 02:12:27.665488 | orchestrator | Thursday 09 April 2026 02:12:21 +0000 (0:00:01.359) 0:08:09.093 ********
2026-04-09 02:12:27.665496 | orchestrator | changed: [testbed-manager]
2026-04-09 02:12:27.665504 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:12:27.665519 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:12:27.665527 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:12:27.665534 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:12:27.665542 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:12:27.665550 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:12:27.665558 | orchestrator |
2026-04-09 02:12:27.665566 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-09 02:12:27.665574 | orchestrator |
2026-04-09 02:12:27.665589 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-09 02:12:27.665597 | orchestrator | Thursday 09 April 2026 02:12:22 +0000 (0:00:01.143) 0:08:10.237 ********
2026-04-09 02:12:27.665605 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:12:27.665613 | orchestrator |
2026-04-09 02:12:27.665621 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-09 02:12:27.665629 | orchestrator | Thursday 09 April 2026 02:12:23 +0000 (0:00:00.900) 0:08:11.138 ********
2026-04-09 02:12:27.665637 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:12:27.665645 | orchestrator | ok: [testbed-manager]
2026-04-09 02:12:27.665653 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:12:27.665661 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:12:27.665669 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:12:27.665676 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:12:27.665684 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:12:27.665692 | orchestrator |
2026-04-09 02:12:27.665700 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-09 02:12:27.665708 | orchestrator | Thursday 09 April 2026 02:12:24 +0000 (0:00:01.132) 0:08:12.271 ********
2026-04-09 02:12:27.665716 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:12:27.665724 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:12:27.665732 | orchestrator | changed: [testbed-manager]
2026-04-09 02:12:27.665740 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:12:27.665748 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:12:27.665756 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:12:27.665763 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:12:27.665771 | orchestrator |
2026-04-09 02:12:27.665779 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-09 02:12:27.665787 | orchestrator | Thursday 09 April 2026 02:12:25 +0000 (0:00:01.251) 0:08:13.523 ********
2026-04-09 02:12:27.665795 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:12:27.665803 | orchestrator |
2026-04-09 02:12:27.665811 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-09 02:12:27.665819 | orchestrator | Thursday 09 April 2026 02:12:26 +0000 (0:00:01.122) 0:08:14.645 ********
2026-04-09 02:12:27.665827 | orchestrator | ok: [testbed-manager]
2026-04-09 02:12:27.665835 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:12:27.665842 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:12:27.665853 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:12:27.665866 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:12:27.665879 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:12:27.665892 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:12:27.665905 | orchestrator |
2026-04-09 02:12:27.665926 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-09 02:12:29.384159 | orchestrator | Thursday 09 April 2026 02:12:27 +0000 (0:00:00.881) 0:08:15.527 ********
2026-04-09 02:12:29.384271 | orchestrator | changed: [testbed-manager]
2026-04-09 02:12:29.384287 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:12:29.384296 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:12:29.384305 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:12:29.384314 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:12:29.384323 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:12:29.384331 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:12:29.384369 | orchestrator |
2026-04-09 02:12:29.384381 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:12:29.384393 | orchestrator | testbed-manager : ok=168 changed=40 unreachable=0 failed=0 skipped=42 rescued=0 ignored=0
2026-04-09 02:12:29.384405 | orchestrator | testbed-node-0 : ok=177 changed=69 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
2026-04-09 02:12:29.384416 | orchestrator | testbed-node-1 : ok=177 changed=69 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
2026-04-09 02:12:29.384425 | orchestrator | testbed-node-2 : ok=177 changed=69 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
2026-04-09 02:12:29.384436 | orchestrator | testbed-node-3 : ok=175 changed=65 unreachable=0 failed=0 skipped=38 rescued=0 ignored=0
2026-04-09 02:12:29.384447 | orchestrator | testbed-node-4 : ok=175 changed=65 unreachable=0 failed=0 skipped=37 rescued=0 ignored=0
2026-04-09 02:12:29.384454 | orchestrator | testbed-node-5 : ok=175 changed=65 unreachable=0 failed=0 skipped=37 rescued=0 ignored=0
2026-04-09 02:12:29.384460 | orchestrator |
2026-04-09 02:12:29.384466 | orchestrator |
2026-04-09 02:12:29.384473 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:12:29.384480 | orchestrator | Thursday 09 April 2026 02:12:28 +0000 (0:00:01.151) 0:08:16.679 ********
2026-04-09 02:12:29.384486 | orchestrator | ===============================================================================
2026-04-09 02:12:29.384492 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.83s
2026-04-09 02:12:29.384498 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.37s
2026-04-09 02:12:29.384504 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.24s
2026-04-09 02:12:29.384511 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.98s
2026-04-09 02:12:29.384517 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.15s
2026-04-09 02:12:29.384537 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 14.01s
2026-04-09 02:12:29.384547 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.85s
2026-04-09 02:12:29.384557 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.43s
2026-04-09 02:12:29.384573 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.87s
2026-04-09 02:12:29.384582 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.63s
2026-04-09 02:12:29.384592 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.52s
2026-04-09 02:12:29.384601 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.45s
2026-04-09 02:12:29.384611 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.41s
2026-04-09 02:12:29.384621 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.00s
2026-04-09 02:12:29.384630 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.89s
2026-04-09 02:12:29.384639 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.22s
2026-04-09 02:12:29.384649 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.10s
2026-04-09 02:12:29.384658 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.84s
2026-04-09 02:12:29.384668 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.79s
2026-04-09 02:12:29.384677 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.58s
2026-04-09 02:12:29.771123 | orchestrator | + osism apply fail2ban
2026-04-09 02:12:43.139181 | orchestrator | 2026-04-09 02:12:43 | INFO  | Task 8da26d4e-9266-4218-826c-d935a0a98d90 (fail2ban) was prepared for execution.
2026-04-09 02:12:43.139264 | orchestrator | 2026-04-09 02:12:43 | INFO  | It takes a moment until task 8da26d4e-9266-4218-826c-d935a0a98d90 (fail2ban) has been started and output is visible here.
2026-04-09 02:13:06.759967 | orchestrator |
2026-04-09 02:13:06.760165 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-09 02:13:06.760195 | orchestrator |
2026-04-09 02:13:06.760215 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-09 02:13:06.760233 | orchestrator | Thursday 09 April 2026 02:12:48 +0000 (0:00:00.317) 0:00:00.317 ********
2026-04-09 02:13:06.760253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 02:13:06.760274 | orchestrator |
2026-04-09 02:13:06.760292 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-09 02:13:06.760309 | orchestrator | Thursday 09 April 2026 02:12:49 +0000 (0:00:01.261) 0:00:01.578 ********
2026-04-09 02:13:06.760327 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:13:06.760346 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:13:06.760363 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:13:06.760381 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:13:06.760399 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:13:06.760415 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:13:06.760432 | orchestrator | changed: [testbed-manager]
2026-04-09 02:13:06.760450 | orchestrator |
2026-04-09 02:13:06.760467 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-09 02:13:06.760485 | orchestrator | Thursday 09 April 2026 02:13:01 +0000 (0:00:11.861) 0:00:13.440 ********
2026-04-09 02:13:06.760505 | orchestrator | changed: [testbed-manager]
2026-04-09 02:13:06.760522 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:13:06.760539 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:13:06.760557 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:13:06.760575 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:13:06.760593 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:13:06.760611 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:13:06.760629 | orchestrator |
2026-04-09 02:13:06.760646 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-09 02:13:06.760664 | orchestrator | Thursday 09 April 2026 02:13:02 +0000 (0:00:01.561) 0:00:15.002 ********
2026-04-09 02:13:06.760682 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:13:06.760701 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:13:06.760719 | orchestrator | ok: [testbed-manager]
2026-04-09 02:13:06.760736 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:13:06.760754 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:13:06.760771 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:13:06.760788 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:13:06.760806 | orchestrator |
2026-04-09 02:13:06.760825 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-09 02:13:06.760844 | orchestrator | Thursday 09 April 2026 02:13:04 +0000 (0:00:01.591) 0:00:16.593 ********
2026-04-09 02:13:06.760861 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:13:06.760878 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:13:06.760895 | orchestrator | changed: [testbed-manager]
2026-04-09 02:13:06.760912 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:13:06.760929 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:13:06.760947 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:13:06.760964 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:13:06.760981 | orchestrator |
2026-04-09 02:13:06.760998 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:13:06.761015 | orchestrator | testbed-manager : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:13:06.761130 | orchestrator | testbed-node-0 : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:13:06.761149 | orchestrator | testbed-node-1 : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:13:06.761165 | orchestrator | testbed-node-2 : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:13:06.761182 | orchestrator | testbed-node-3 : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:13:06.761199 | orchestrator | testbed-node-4 : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:13:06.761216 | orchestrator | testbed-node-5 : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:13:06.761233 | orchestrator |
2026-04-09 02:13:06.761249 | orchestrator |
2026-04-09 02:13:06.761266 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:13:06.761283 | orchestrator | Thursday 09 April 2026 02:13:06 +0000 (0:00:01.746) 0:00:18.340 ********
2026-04-09 02:13:06.761299 | orchestrator | ===============================================================================
2026-04-09 02:13:06.761315 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.86s
2026-04-09 02:13:06.761346 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.75s
2026-04-09 02:13:06.761362 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.59s
2026-04-09 02:13:06.761379 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.56s
2026-04-09 02:13:06.761397 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.26s
2026-04-09 02:13:07.116764 | orchestrator | + osism apply network
2026-04-09 02:13:19.374469 | orchestrator | 2026-04-09 02:13:19 | INFO  | Task 7fe7681b-18c4-4456-a030-3b2c8d48f548 (network) was prepared for execution.
2026-04-09 02:13:19.374581 | orchestrator | 2026-04-09 02:13:19 | INFO  | It takes a moment until task 7fe7681b-18c4-4456-a030-3b2c8d48f548 (network) has been started and output is visible here.
2026-04-09 02:13:50.708122 | orchestrator |
2026-04-09 02:13:50.708262 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-09 02:13:50.708292 | orchestrator |
2026-04-09 02:13:50.708310 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-09 02:13:50.708330 | orchestrator | Thursday 09 April 2026 02:13:24 +0000 (0:00:00.279) 0:00:00.279 ********
2026-04-09 02:13:50.708349 | orchestrator | ok: [testbed-manager]
2026-04-09 02:13:50.708369 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:13:50.708388 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:13:50.708407 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:13:50.708426 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:13:50.708446 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:13:50.708466 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:13:50.708483 | orchestrator |
2026-04-09 02:13:50.708495 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-09 02:13:50.708507 | orchestrator | Thursday 09 April 2026 02:13:24 +0000 (0:00:00.815) 0:00:01.095 ********
2026-04-09 02:13:50.708521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 02:13:50.708535 | orchestrator |
2026-04-09 02:13:50.708547 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-09 02:13:50.708558 | orchestrator | Thursday 09 April 2026 02:13:26 +0000 (0:00:01.299) 0:00:02.395 ********
2026-04-09 02:13:50.708597 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:13:50.708611 | orchestrator | ok: [testbed-manager]
2026-04-09 02:13:50.708624 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:13:50.708637 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:13:50.708649 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:13:50.708662 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:13:50.708674 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:13:50.708687 | orchestrator |
2026-04-09 02:13:50.708701 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-09 02:13:50.708714 | orchestrator | Thursday 09 April 2026 02:13:28 +0000 (0:00:02.015) 0:00:04.411 ********
2026-04-09 02:13:50.708725 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:13:50.708736 | orchestrator | ok: [testbed-manager]
2026-04-09 02:13:50.708747 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:13:50.708759 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:13:50.708770 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:13:50.708781 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:13:50.708792 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:13:50.708802 | orchestrator |
2026-04-09 02:13:50.708813 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-09 02:13:50.708825 | orchestrator | Thursday 09 April 2026 02:13:30 +0000 (0:00:02.215) 0:00:06.626 ********
2026-04-09 02:13:50.708836 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-09 02:13:50.708848 | orchestrator | ok:
[testbed-manager] => (item=/etc/netplan) 2026-04-09 02:13:50.708859 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-09 02:13:50.708870 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-09 02:13:50.708881 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-09 02:13:50.708891 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-09 02:13:50.708902 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-09 02:13:50.708913 | orchestrator | 2026-04-09 02:13:50.708942 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-09 02:13:50.708954 | orchestrator | Thursday 09 April 2026 02:13:31 +0000 (0:00:01.085) 0:00:07.711 ******** 2026-04-09 02:13:50.708970 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 02:13:50.708983 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 02:13:50.708994 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 02:13:50.709005 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 02:13:50.709016 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 02:13:50.709027 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 02:13:50.709061 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 02:13:50.709072 | orchestrator | 2026-04-09 02:13:50.709084 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-09 02:13:50.709095 | orchestrator | Thursday 09 April 2026 02:13:35 +0000 (0:00:03.724) 0:00:11.435 ******** 2026-04-09 02:13:50.709107 | orchestrator | changed: [testbed-manager] 2026-04-09 02:13:50.709118 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:13:50.709129 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:13:50.709140 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:13:50.709151 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:13:50.709162 | orchestrator | 
changed: [testbed-node-4] 2026-04-09 02:13:50.709173 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:13:50.709184 | orchestrator | 2026-04-09 02:13:50.709196 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-04-09 02:13:50.709207 | orchestrator | Thursday 09 April 2026 02:13:36 +0000 (0:00:01.690) 0:00:13.126 ******** 2026-04-09 02:13:50.709218 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 02:13:50.709229 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 02:13:50.709240 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 02:13:50.709251 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 02:13:50.709262 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 02:13:50.709339 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 02:13:50.709353 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 02:13:50.709364 | orchestrator | 2026-04-09 02:13:50.709375 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-09 02:13:50.709386 | orchestrator | Thursday 09 April 2026 02:13:38 +0000 (0:00:01.875) 0:00:15.002 ******** 2026-04-09 02:13:50.709397 | orchestrator | ok: [testbed-manager] 2026-04-09 02:13:50.709408 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:13:50.709419 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:13:50.709430 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:13:50.709441 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:13:50.709452 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:13:50.709463 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:13:50.709474 | orchestrator | 2026-04-09 02:13:50.709485 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-09 02:13:50.709519 | orchestrator | Thursday 09 April 2026 02:13:39 +0000 (0:00:01.232) 0:00:16.234 ******** 2026-04-09 02:13:50.709531 | orchestrator 
| skipping: [testbed-manager] 2026-04-09 02:13:50.709543 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:13:50.709554 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:13:50.709564 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:13:50.709575 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:13:50.709593 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:13:50.709611 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:13:50.709629 | orchestrator | 2026-04-09 02:13:50.709653 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-09 02:13:50.709676 | orchestrator | Thursday 09 April 2026 02:13:40 +0000 (0:00:00.780) 0:00:17.014 ******** 2026-04-09 02:13:50.709693 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:13:50.709712 | orchestrator | ok: [testbed-manager] 2026-04-09 02:13:50.709730 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:13:50.709749 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:13:50.709767 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:13:50.709786 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:13:50.709797 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:13:50.709808 | orchestrator | 2026-04-09 02:13:50.709819 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-09 02:13:50.709835 | orchestrator | Thursday 09 April 2026 02:13:43 +0000 (0:00:02.258) 0:00:19.272 ******** 2026-04-09 02:13:50.709852 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:13:50.709877 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:13:50.709898 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:13:50.709915 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:13:50.709931 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:13:50.709948 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:13:50.709966 | orchestrator | changed: [testbed-manager] => (item={'dest': 
'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-04-09 02:13:50.709985 | orchestrator | 2026-04-09 02:13:50.710002 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-09 02:13:50.710230 | orchestrator | Thursday 09 April 2026 02:13:44 +0000 (0:00:01.001) 0:00:20.274 ******** 2026-04-09 02:13:50.710263 | orchestrator | ok: [testbed-manager] 2026-04-09 02:13:50.710283 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:13:50.710295 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:13:50.710306 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:13:50.710317 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:13:50.710328 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:13:50.710339 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:13:50.710349 | orchestrator | 2026-04-09 02:13:50.710361 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-09 02:13:50.710372 | orchestrator | Thursday 09 April 2026 02:13:45 +0000 (0:00:01.866) 0:00:22.140 ******** 2026-04-09 02:13:50.710384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 02:13:50.710410 | orchestrator | 2026-04-09 02:13:50.710422 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-09 02:13:50.710436 | orchestrator | Thursday 09 April 2026 02:13:47 +0000 (0:00:01.425) 0:00:23.566 ******** 2026-04-09 02:13:50.710454 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:13:50.710473 | orchestrator | ok: [testbed-manager] 2026-04-09 02:13:50.710490 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:13:50.710508 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:13:50.710528 | orchestrator | 
ok: [testbed-node-3] 2026-04-09 02:13:50.710562 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:13:50.710588 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:13:50.710606 | orchestrator | 2026-04-09 02:13:50.710623 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-09 02:13:50.710640 | orchestrator | Thursday 09 April 2026 02:13:48 +0000 (0:00:01.323) 0:00:24.889 ******** 2026-04-09 02:13:50.710658 | orchestrator | ok: [testbed-manager] 2026-04-09 02:13:50.710676 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:13:50.710694 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:13:50.710711 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:13:50.710728 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:13:50.710747 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:13:50.710766 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:13:50.710784 | orchestrator | 2026-04-09 02:13:50.710801 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-09 02:13:50.710850 | orchestrator | Thursday 09 April 2026 02:13:49 +0000 (0:00:00.725) 0:00:25.615 ******** 2026-04-09 02:13:50.710863 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 02:13:50.710874 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 02:13:50.710885 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 02:13:50.710896 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 02:13:50.710907 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 02:13:50.710918 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 02:13:50.710929 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 02:13:50.710940 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 02:13:50.710951 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 02:13:50.710961 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 02:13:50.710972 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 02:13:50.710983 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 02:13:50.710994 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 02:13:50.711005 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 02:13:50.711015 | orchestrator | 2026-04-09 02:13:50.711204 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-09 02:14:09.210338 | orchestrator | Thursday 09 April 2026 02:13:50 +0000 (0:00:01.330) 0:00:26.945 ******** 2026-04-09 02:14:09.210416 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:14:09.210424 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:14:09.210429 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:14:09.210434 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:14:09.210439 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:14:09.210444 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:14:09.210449 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:14:09.210454 | orchestrator | 2026-04-09 02:14:09.210459 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-09 02:14:09.210481 | orchestrator | Thursday 09 April 2026 02:13:51 +0000 (0:00:00.715) 0:00:27.661 ******** 2026-04-09 02:14:09.210488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, 
testbed-manager, testbed-node-2, testbed-node-0, testbed-node-3, testbed-node-5, testbed-node-4 2026-04-09 02:14:09.210495 | orchestrator | 2026-04-09 02:14:09.210500 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-09 02:14:09.210505 | orchestrator | Thursday 09 April 2026 02:13:56 +0000 (0:00:04.961) 0:00:32.622 ******** 2026-04-09 02:14:09.210511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210529 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210539 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210580 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210589 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-04-09 
02:14:09.210615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210624 | orchestrator | 2026-04-09 02:14:09.210629 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-09 02:14:09.210634 | orchestrator | Thursday 09 April 2026 02:14:02 +0000 (0:00:06.503) 0:00:39.126 ******** 2026-04-09 02:14:09.210639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210653 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-04-09 02:14:09.210686 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:09.210710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:16.214648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-09 02:14:16.214732 | orchestrator | 2026-04-09 02:14:16.214743 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-09 02:14:16.214751 | orchestrator | Thursday 09 April 2026 02:14:09 +0000 (0:00:06.323) 0:00:45.449 ******** 2026-04-09 02:14:16.214760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 02:14:16.214767 | orchestrator | 2026-04-09 02:14:16.214774 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-09 02:14:16.214781 | orchestrator | Thursday 09 April 2026 02:14:10 +0000 (0:00:01.421) 0:00:46.870 ******** 2026-04-09 
02:14:16.214787 | orchestrator | ok: [testbed-manager] 2026-04-09 02:14:16.214794 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:14:16.214801 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:14:16.214807 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:14:16.214814 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:14:16.214820 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:14:16.214827 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:14:16.214833 | orchestrator | 2026-04-09 02:14:16.214839 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-09 02:14:16.214846 | orchestrator | Thursday 09 April 2026 02:14:11 +0000 (0:00:01.256) 0:00:48.127 ******** 2026-04-09 02:14:16.214853 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 02:14:16.214860 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 02:14:16.214866 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 02:14:16.214873 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 02:14:16.214879 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 02:14:16.214886 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 02:14:16.214892 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 02:14:16.214899 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:14:16.214906 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 02:14:16.214912 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 02:14:16.214919 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 02:14:16.214940 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 02:14:16.214946 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 02:14:16.214953 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:14:16.214959 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 02:14:16.214982 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 02:14:16.214989 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 02:14:16.214995 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:14:16.215001 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 02:14:16.215011 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 02:14:16.215021 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 02:14:16.215057 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 02:14:16.215068 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 02:14:16.215079 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:14:16.215090 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 02:14:16.215100 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 02:14:16.215109 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 02:14:16.215119 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 02:14:16.215127 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 02:14:16.215133 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:14:16.215140 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 02:14:16.215146 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 02:14:16.215153 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 02:14:16.215159 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 02:14:16.215165 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:14:16.215171 | orchestrator | 2026-04-09 02:14:16.215178 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-09 02:14:16.215197 | orchestrator | Thursday 09 April 2026 02:14:14 +0000 (0:00:02.368) 0:00:50.495 ******** 2026-04-09 02:14:16.215204 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:14:16.215211 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:14:16.215219 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:14:16.215226 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:14:16.215234 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:14:16.215241 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:14:16.215248 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:14:16.215255 | orchestrator | 2026-04-09 02:14:16.215262 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-09 02:14:16.215270 | orchestrator | Thursday 09 April 2026 02:14:14 +0000 (0:00:00.712) 0:00:51.207 ******** 2026-04-09 02:14:16.215277 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:14:16.215284 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:14:16.215291 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:14:16.215299 | orchestrator | skipping: [testbed-node-2] 
2026-04-09 02:14:16.215307 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:14:16.215314 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:14:16.215321 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:14:16.215328 | orchestrator | 2026-04-09 02:14:16.215336 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:14:16.215344 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 02:14:16.215353 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 02:14:16.215369 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 02:14:16.215377 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 02:14:16.215384 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 02:14:16.215392 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 02:14:16.215399 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 02:14:16.215406 | orchestrator | 2026-04-09 02:14:16.215412 | orchestrator | 2026-04-09 02:14:16.215419 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:14:16.215425 | orchestrator | Thursday 09 April 2026 02:14:15 +0000 (0:00:00.764) 0:00:51.971 ******** 2026-04-09 02:14:16.215431 | orchestrator | =============================================================================== 2026-04-09 02:14:16.215441 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.50s 2026-04-09 02:14:16.215448 | orchestrator | osism.commons.network : Create systemd networkd network files 
----------- 6.32s 2026-04-09 02:14:16.215454 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.96s 2026-04-09 02:14:16.215460 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.72s 2026-04-09 02:14:16.215466 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.37s 2026-04-09 02:14:16.215472 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.26s 2026-04-09 02:14:16.215478 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 2.22s 2026-04-09 02:14:16.215484 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.02s 2026-04-09 02:14:16.215491 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.88s 2026-04-09 02:14:16.215497 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.87s 2026-04-09 02:14:16.215503 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.69s 2026-04-09 02:14:16.215509 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.43s 2026-04-09 02:14:16.215515 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.42s 2026-04-09 02:14:16.215522 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.33s 2026-04-09 02:14:16.215528 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.32s 2026-04-09 02:14:16.215534 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.30s 2026-04-09 02:14:16.215540 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.26s 2026-04-09 02:14:16.215546 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 
1.23s 2026-04-09 02:14:16.215552 | orchestrator | osism.commons.network : Create required directories --------------------- 1.09s 2026-04-09 02:14:16.215558 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.00s 2026-04-09 02:14:16.609323 | orchestrator | + osism apply wireguard 2026-04-09 02:14:29.024447 | orchestrator | 2026-04-09 02:14:29 | INFO  | Task afa04b38-e2fb-4742-a282-d69363bd5c42 (wireguard) was prepared for execution. 2026-04-09 02:14:29.024522 | orchestrator | 2026-04-09 02:14:29 | INFO  | It takes a moment until task afa04b38-e2fb-4742-a282-d69363bd5c42 (wireguard) has been started and output is visible here. 2026-04-09 02:14:51.594418 | orchestrator | 2026-04-09 02:14:51.594578 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-09 02:14:51.594654 | orchestrator | 2026-04-09 02:14:51.594677 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-09 02:14:51.594697 | orchestrator | Thursday 09 April 2026 02:14:33 +0000 (0:00:00.238) 0:00:00.238 ******** 2026-04-09 02:14:51.594716 | orchestrator | ok: [testbed-manager] 2026-04-09 02:14:51.594741 | orchestrator | 2026-04-09 02:14:51.594766 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-09 02:14:51.594784 | orchestrator | Thursday 09 April 2026 02:14:35 +0000 (0:00:01.727) 0:00:01.966 ******** 2026-04-09 02:14:51.594802 | orchestrator | changed: [testbed-manager] 2026-04-09 02:14:51.594823 | orchestrator | 2026-04-09 02:14:51.594841 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-09 02:14:51.594853 | orchestrator | Thursday 09 April 2026 02:14:43 +0000 (0:00:07.547) 0:00:09.513 ******** 2026-04-09 02:14:51.594864 | orchestrator | changed: [testbed-manager] 2026-04-09 02:14:51.594874 | orchestrator | 2026-04-09 02:14:51.594885 | orchestrator | 
TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-09 02:14:51.594897 | orchestrator | Thursday 09 April 2026 02:14:43 +0000 (0:00:00.684) 0:00:10.197 ******** 2026-04-09 02:14:51.594907 | orchestrator | changed: [testbed-manager] 2026-04-09 02:14:51.594918 | orchestrator | 2026-04-09 02:14:51.594931 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-09 02:14:51.594944 | orchestrator | Thursday 09 April 2026 02:14:44 +0000 (0:00:00.487) 0:00:10.685 ******** 2026-04-09 02:14:51.594957 | orchestrator | ok: [testbed-manager] 2026-04-09 02:14:51.594970 | orchestrator | 2026-04-09 02:14:51.594982 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-09 02:14:51.594995 | orchestrator | Thursday 09 April 2026 02:14:44 +0000 (0:00:00.743) 0:00:11.429 ******** 2026-04-09 02:14:51.595008 | orchestrator | ok: [testbed-manager] 2026-04-09 02:14:51.595020 | orchestrator | 2026-04-09 02:14:51.595072 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-09 02:14:51.595091 | orchestrator | Thursday 09 April 2026 02:14:45 +0000 (0:00:00.447) 0:00:11.876 ******** 2026-04-09 02:14:51.595103 | orchestrator | ok: [testbed-manager] 2026-04-09 02:14:51.595116 | orchestrator | 2026-04-09 02:14:51.595128 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-09 02:14:51.595141 | orchestrator | Thursday 09 April 2026 02:14:45 +0000 (0:00:00.457) 0:00:12.333 ******** 2026-04-09 02:14:51.595154 | orchestrator | changed: [testbed-manager] 2026-04-09 02:14:51.595167 | orchestrator | 2026-04-09 02:14:51.595180 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-09 02:14:51.595192 | orchestrator | Thursday 09 April 2026 02:14:47 +0000 (0:00:01.260) 0:00:13.594 ******** 2026-04-09 02:14:51.595206 | 
orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 02:14:51.595219 | orchestrator | changed: [testbed-manager] 2026-04-09 02:14:51.595230 | orchestrator | 2026-04-09 02:14:51.595241 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-09 02:14:51.595252 | orchestrator | Thursday 09 April 2026 02:14:48 +0000 (0:00:01.029) 0:00:14.624 ******** 2026-04-09 02:14:51.595263 | orchestrator | changed: [testbed-manager] 2026-04-09 02:14:51.595274 | orchestrator | 2026-04-09 02:14:51.595286 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-09 02:14:51.595297 | orchestrator | Thursday 09 April 2026 02:14:50 +0000 (0:00:01.955) 0:00:16.580 ******** 2026-04-09 02:14:51.595308 | orchestrator | changed: [testbed-manager] 2026-04-09 02:14:51.595318 | orchestrator | 2026-04-09 02:14:51.595329 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:14:51.595341 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:14:51.595353 | orchestrator | 2026-04-09 02:14:51.595364 | orchestrator | 2026-04-09 02:14:51.595375 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:14:51.595386 | orchestrator | Thursday 09 April 2026 02:14:51 +0000 (0:00:01.039) 0:00:17.619 ******** 2026-04-09 02:14:51.595412 | orchestrator | =============================================================================== 2026-04-09 02:14:51.595423 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.55s 2026-04-09 02:14:51.595434 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.96s 2026-04-09 02:14:51.595445 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.73s 2026-04-09 02:14:51.595456 | 
orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s 2026-04-09 02:14:51.595466 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.04s 2026-04-09 02:14:51.595477 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.03s 2026-04-09 02:14:51.595487 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.74s 2026-04-09 02:14:51.595498 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.68s 2026-04-09 02:14:51.595509 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.49s 2026-04-09 02:14:51.595520 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s 2026-04-09 02:14:51.595531 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.45s 2026-04-09 02:14:51.977984 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-09 02:14:52.011263 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-09 02:14:52.011387 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-09 02:14:52.087157 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 197 0 --:--:-- --:--:-- --:--:-- 200 2026-04-09 02:14:52.107324 | orchestrator | + osism apply --environment custom workarounds 2026-04-09 02:14:54.312571 | orchestrator | 2026-04-09 02:14:54 | INFO  | Trying to run play workarounds in environment custom 2026-04-09 02:15:04.415628 | orchestrator | 2026-04-09 02:15:04 | INFO  | Task a00f5a4c-d7e6-4e66-9fe8-36280f60c0f0 (workarounds) was prepared for execution. 2026-04-09 02:15:04.415723 | orchestrator | 2026-04-09 02:15:04 | INFO  | It takes a moment until task a00f5a4c-d7e6-4e66-9fe8-36280f60c0f0 (workarounds) has been started and output is visible here. 
2026-04-09 02:15:31.555785 | orchestrator | 2026-04-09 02:15:31.555881 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 02:15:31.555889 | orchestrator | 2026-04-09 02:15:31.555895 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-09 02:15:31.555900 | orchestrator | Thursday 09 April 2026 02:15:08 +0000 (0:00:00.131) 0:00:00.131 ******** 2026-04-09 02:15:31.555905 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-09 02:15:31.555910 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-09 02:15:31.555915 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-09 02:15:31.555919 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-09 02:15:31.555923 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-09 02:15:31.555927 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-09 02:15:31.555931 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-09 02:15:31.555936 | orchestrator | 2026-04-09 02:15:31.555940 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-09 02:15:31.555946 | orchestrator | 2026-04-09 02:15:31.555987 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-09 02:15:31.555993 | orchestrator | Thursday 09 April 2026 02:15:09 +0000 (0:00:00.928) 0:00:01.060 ******** 2026-04-09 02:15:31.555998 | orchestrator | ok: [testbed-manager] 2026-04-09 02:15:31.556003 | orchestrator | 2026-04-09 02:15:31.556067 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-09 02:15:31.556075 | orchestrator | 2026-04-09 02:15:31.556082 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-09 02:15:31.556089 | orchestrator | Thursday 09 April 2026 02:15:12 +0000 (0:00:02.714) 0:00:03.774 ******** 2026-04-09 02:15:31.556096 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:15:31.556104 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:15:31.556108 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:15:31.556113 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:15:31.556117 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:15:31.556121 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:15:31.556125 | orchestrator | 2026-04-09 02:15:31.556129 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-09 02:15:31.556133 | orchestrator | 2026-04-09 02:15:31.556137 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-09 02:15:31.556153 | orchestrator | Thursday 09 April 2026 02:15:14 +0000 (0:00:01.985) 0:00:05.760 ******** 2026-04-09 02:15:31.556159 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 02:15:31.556164 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 02:15:31.556168 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 02:15:31.556173 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 02:15:31.556177 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 02:15:31.556181 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 02:15:31.556185 | orchestrator | 2026-04-09 02:15:31.556189 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-09 02:15:31.556193 | orchestrator | Thursday 09 April 2026 02:15:16 +0000 (0:00:01.574) 0:00:07.335 ******** 2026-04-09 02:15:31.556198 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:15:31.556202 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:15:31.556206 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:15:31.556210 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:15:31.556214 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:15:31.556219 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:15:31.556223 | orchestrator | 2026-04-09 02:15:31.556227 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-09 02:15:31.556231 | orchestrator | Thursday 09 April 2026 02:15:19 +0000 (0:00:03.684) 0:00:11.019 ******** 2026-04-09 02:15:31.556235 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:15:31.556239 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:15:31.556244 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:15:31.556248 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:15:31.556252 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:15:31.556256 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:15:31.556260 | orchestrator | 2026-04-09 02:15:31.556265 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-09 02:15:31.556269 | orchestrator | 2026-04-09 02:15:31.556273 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-09 02:15:31.556277 | orchestrator | Thursday 09 April 2026 02:15:20 +0000 (0:00:00.752) 0:00:11.772 ******** 2026-04-09 02:15:31.556281 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:15:31.556285 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:15:31.556289 | orchestrator | changed: [testbed-node-2] 2026-04-09 
02:15:31.556294 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:15:31.556298 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:15:31.556302 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:15:31.556306 | orchestrator | changed: [testbed-manager] 2026-04-09 02:15:31.556316 | orchestrator | 2026-04-09 02:15:31.556327 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-09 02:15:31.556331 | orchestrator | Thursday 09 April 2026 02:15:22 +0000 (0:00:01.707) 0:00:13.479 ******** 2026-04-09 02:15:31.556336 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:15:31.556345 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:15:31.556350 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:15:31.556355 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:15:31.556360 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:15:31.556364 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:15:31.556381 | orchestrator | changed: [testbed-manager] 2026-04-09 02:15:31.556386 | orchestrator | 2026-04-09 02:15:31.556391 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-09 02:15:31.556396 | orchestrator | Thursday 09 April 2026 02:15:24 +0000 (0:00:01.786) 0:00:15.266 ******** 2026-04-09 02:15:31.556401 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:15:31.556406 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:15:31.556410 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:15:31.556415 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:15:31.556420 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:15:31.556425 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:15:31.556429 | orchestrator | ok: [testbed-manager] 2026-04-09 02:15:31.556434 | orchestrator | 2026-04-09 02:15:31.556439 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-09 02:15:31.556444 | orchestrator 
| Thursday 09 April 2026 02:15:25 +0000 (0:00:01.686) 0:00:16.952 ******** 2026-04-09 02:15:31.556448 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:15:31.556453 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:15:31.556458 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:15:31.556463 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:15:31.556467 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:15:31.556472 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:15:31.556477 | orchestrator | changed: [testbed-manager] 2026-04-09 02:15:31.556482 | orchestrator | 2026-04-09 02:15:31.556487 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-09 02:15:31.556491 | orchestrator | Thursday 09 April 2026 02:15:27 +0000 (0:00:02.054) 0:00:19.007 ******** 2026-04-09 02:15:31.556496 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:15:31.556501 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:15:31.556506 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:15:31.556510 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:15:31.556515 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:15:31.556520 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:15:31.556525 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:15:31.556530 | orchestrator | 2026-04-09 02:15:31.556535 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-09 02:15:31.556540 | orchestrator | 2026-04-09 02:15:31.556545 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-09 02:15:31.556550 | orchestrator | Thursday 09 April 2026 02:15:28 +0000 (0:00:00.660) 0:00:19.667 ******** 2026-04-09 02:15:31.556554 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:15:31.556559 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:15:31.556564 | orchestrator | ok: [testbed-node-1] 
2026-04-09 02:15:31.556569 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:15:31.556574 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:15:31.556578 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:15:31.556587 | orchestrator | ok: [testbed-manager] 2026-04-09 02:15:31.556591 | orchestrator | 2026-04-09 02:15:31.556596 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:15:31.556602 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:15:31.556608 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:31.556617 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:31.556623 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:31.556628 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:31.556633 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:31.556638 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:31.556642 | orchestrator | 2026-04-09 02:15:31.556647 | orchestrator | 2026-04-09 02:15:31.556652 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:15:31.556656 | orchestrator | Thursday 09 April 2026 02:15:31 +0000 (0:00:03.000) 0:00:22.668 ******** 2026-04-09 02:15:31.556661 | orchestrator | =============================================================================== 2026-04-09 02:15:31.556665 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.68s 2026-04-09 02:15:31.556669 | orchestrator | Install python3-docker 
-------------------------------------------------- 3.00s 2026-04-09 02:15:31.556673 | orchestrator | Apply netplan configuration --------------------------------------------- 2.71s 2026-04-09 02:15:31.556678 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.06s 2026-04-09 02:15:31.556682 | orchestrator | Apply netplan configuration --------------------------------------------- 1.99s 2026-04-09 02:15:31.556686 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.79s 2026-04-09 02:15:31.556690 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.71s 2026-04-09 02:15:31.556695 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.69s 2026-04-09 02:15:31.556699 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.58s 2026-04-09 02:15:31.556703 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.93s 2026-04-09 02:15:31.556707 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s 2026-04-09 02:15:31.556714 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s 2026-04-09 02:15:32.361709 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-09 02:15:44.600941 | orchestrator | 2026-04-09 02:15:44 | INFO  | Task 6635f39a-6510-426e-a316-c832a571ae3a (reboot) was prepared for execution. 2026-04-09 02:15:44.601131 | orchestrator | 2026-04-09 02:15:44 | INFO  | It takes a moment until task 6635f39a-6510-426e-a316-c832a571ae3a (reboot) has been started and output is visible here. 
2026-04-09 02:15:55.433855 | orchestrator | 2026-04-09 02:15:55.433947 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 02:15:55.433956 | orchestrator | 2026-04-09 02:15:55.433961 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 02:15:55.433967 | orchestrator | Thursday 09 April 2026 02:15:49 +0000 (0:00:00.218) 0:00:00.218 ******** 2026-04-09 02:15:55.433972 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:15:55.433977 | orchestrator | 2026-04-09 02:15:55.433982 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 02:15:55.433987 | orchestrator | Thursday 09 April 2026 02:15:49 +0000 (0:00:00.112) 0:00:00.331 ******** 2026-04-09 02:15:55.433992 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:15:55.433996 | orchestrator | 2026-04-09 02:15:55.434001 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 02:15:55.434080 | orchestrator | Thursday 09 April 2026 02:15:50 +0000 (0:00:00.950) 0:00:01.282 ******** 2026-04-09 02:15:55.434087 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:15:55.434092 | orchestrator | 2026-04-09 02:15:55.434096 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 02:15:55.434101 | orchestrator | 2026-04-09 02:15:55.434105 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 02:15:55.434110 | orchestrator | Thursday 09 April 2026 02:15:50 +0000 (0:00:00.131) 0:00:01.413 ******** 2026-04-09 02:15:55.434114 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:15:55.434119 | orchestrator | 2026-04-09 02:15:55.434123 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 02:15:55.434128 | orchestrator | Thursday 09 April 
2026 02:15:50 +0000 (0:00:00.120) 0:00:01.534 ******** 2026-04-09 02:15:55.434147 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:15:55.434152 | orchestrator | 2026-04-09 02:15:55.434157 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 02:15:55.434173 | orchestrator | Thursday 09 April 2026 02:15:51 +0000 (0:00:00.668) 0:00:02.202 ******** 2026-04-09 02:15:55.434177 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:15:55.434182 | orchestrator | 2026-04-09 02:15:55.434186 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 02:15:55.434190 | orchestrator | 2026-04-09 02:15:55.434195 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 02:15:55.434199 | orchestrator | Thursday 09 April 2026 02:15:51 +0000 (0:00:00.114) 0:00:02.317 ******** 2026-04-09 02:15:55.434204 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:15:55.434208 | orchestrator | 2026-04-09 02:15:55.434212 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 02:15:55.434217 | orchestrator | Thursday 09 April 2026 02:15:51 +0000 (0:00:00.242) 0:00:02.560 ******** 2026-04-09 02:15:55.434221 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:15:55.434226 | orchestrator | 2026-04-09 02:15:55.434230 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 02:15:55.434235 | orchestrator | Thursday 09 April 2026 02:15:52 +0000 (0:00:00.675) 0:00:03.235 ******** 2026-04-09 02:15:55.434239 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:15:55.434243 | orchestrator | 2026-04-09 02:15:55.434248 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 02:15:55.434253 | orchestrator | 2026-04-09 02:15:55.434257 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-04-09 02:15:55.434261 | orchestrator | Thursday 09 April 2026 02:15:52 +0000 (0:00:00.139) 0:00:03.375 ******** 2026-04-09 02:15:55.434266 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:15:55.434270 | orchestrator | 2026-04-09 02:15:55.434275 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 02:15:55.434279 | orchestrator | Thursday 09 April 2026 02:15:52 +0000 (0:00:00.118) 0:00:03.493 ******** 2026-04-09 02:15:55.434283 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:15:55.434288 | orchestrator | 2026-04-09 02:15:55.434292 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 02:15:55.434297 | orchestrator | Thursday 09 April 2026 02:15:53 +0000 (0:00:00.680) 0:00:04.173 ******** 2026-04-09 02:15:55.434301 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:15:55.434305 | orchestrator | 2026-04-09 02:15:55.434310 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 02:15:55.434321 | orchestrator | 2026-04-09 02:15:55.434326 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 02:15:55.434330 | orchestrator | Thursday 09 April 2026 02:15:53 +0000 (0:00:00.131) 0:00:04.305 ******** 2026-04-09 02:15:55.434335 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:15:55.434339 | orchestrator | 2026-04-09 02:15:55.434343 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 02:15:55.434348 | orchestrator | Thursday 09 April 2026 02:15:53 +0000 (0:00:00.120) 0:00:04.426 ******** 2026-04-09 02:15:55.434357 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:15:55.434362 | orchestrator | 2026-04-09 02:15:55.434366 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-09 02:15:55.434371 | orchestrator | Thursday 09 April 2026 02:15:54 +0000 (0:00:00.657) 0:00:05.083 ******** 2026-04-09 02:15:55.434375 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:15:55.434379 | orchestrator | 2026-04-09 02:15:55.434384 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 02:15:55.434389 | orchestrator | 2026-04-09 02:15:55.434393 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 02:15:55.434397 | orchestrator | Thursday 09 April 2026 02:15:54 +0000 (0:00:00.119) 0:00:05.203 ******** 2026-04-09 02:15:55.434402 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:15:55.434406 | orchestrator | 2026-04-09 02:15:55.434411 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 02:15:55.434415 | orchestrator | Thursday 09 April 2026 02:15:54 +0000 (0:00:00.103) 0:00:05.307 ******** 2026-04-09 02:15:55.434419 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:15:55.434424 | orchestrator | 2026-04-09 02:15:55.434428 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 02:15:55.434433 | orchestrator | Thursday 09 April 2026 02:15:54 +0000 (0:00:00.656) 0:00:05.963 ******** 2026-04-09 02:15:55.434450 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:15:55.434454 | orchestrator | 2026-04-09 02:15:55.434459 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:15:55.434464 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:55.434470 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:55.434474 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-09 02:15:55.434479 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:55.434483 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:55.434488 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:15:55.434492 | orchestrator | 2026-04-09 02:15:55.434497 | orchestrator | 2026-04-09 02:15:55.434501 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:15:55.434505 | orchestrator | Thursday 09 April 2026 02:15:55 +0000 (0:00:00.043) 0:00:06.007 ******** 2026-04-09 02:15:55.434512 | orchestrator | =============================================================================== 2026-04-09 02:15:55.434517 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.29s 2026-04-09 02:15:55.434521 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.82s 2026-04-09 02:15:55.434526 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2026-04-09 02:15:55.844159 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-09 02:16:08.118845 | orchestrator | 2026-04-09 02:16:08 | INFO  | Task 8491c9c8-14ec-4659-9889-5fb9175be4c8 (wait-for-connection) was prepared for execution. 2026-04-09 02:16:08.118948 | orchestrator | 2026-04-09 02:16:08 | INFO  | It takes a moment until task 8491c9c8-14ec-4659-9889-5fb9175be4c8 (wait-for-connection) has been started and output is visible here. 
2026-04-09 02:16:24.869900 | orchestrator | 2026-04-09 02:16:24.870102 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-09 02:16:24.870158 | orchestrator | 2026-04-09 02:16:24.870173 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-09 02:16:24.870185 | orchestrator | Thursday 09 April 2026 02:16:12 +0000 (0:00:00.284) 0:00:00.284 ******** 2026-04-09 02:16:24.870196 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:16:24.870209 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:16:24.870220 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:16:24.870231 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:16:24.870242 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:16:24.870253 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:16:24.870264 | orchestrator | 2026-04-09 02:16:24.870275 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:16:24.870287 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:16:24.870300 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:16:24.870311 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:16:24.870322 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:16:24.870333 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:16:24.870344 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:16:24.870355 | orchestrator | 2026-04-09 02:16:24.870367 | orchestrator | 2026-04-09 02:16:24.870387 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-09 02:16:24.870415 | orchestrator | Thursday 09 April 2026 02:16:24 +0000 (0:00:11.574) 0:00:11.859 ******** 2026-04-09 02:16:24.870437 | orchestrator | =============================================================================== 2026-04-09 02:16:24.870456 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s 2026-04-09 02:16:25.222229 | orchestrator | + osism apply hddtemp 2026-04-09 02:16:37.493950 | orchestrator | 2026-04-09 02:16:37 | INFO  | Task 4c9a4e27-9500-44c4-a698-ae0d07ed33ca (hddtemp) was prepared for execution. 2026-04-09 02:16:37.494187 | orchestrator | 2026-04-09 02:16:37 | INFO  | It takes a moment until task 4c9a4e27-9500-44c4-a698-ae0d07ed33ca (hddtemp) has been started and output is visible here. 2026-04-09 02:17:06.848519 | orchestrator | 2026-04-09 02:17:06.848637 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-09 02:17:06.848651 | orchestrator | 2026-04-09 02:17:06.848661 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-09 02:17:06.848670 | orchestrator | Thursday 09 April 2026 02:16:42 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-04-09 02:17:06.848679 | orchestrator | ok: [testbed-manager] 2026-04-09 02:17:06.848688 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:17:06.848696 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:17:06.848704 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:17:06.848712 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:17:06.848721 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:17:06.848729 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:17:06.848737 | orchestrator | 2026-04-09 02:17:06.848746 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-04-09 02:17:06.848754 | orchestrator | Thursday 09 April 2026 
02:16:43 +0000 (0:00:00.828) 0:00:01.110 ******** 2026-04-09 02:17:06.848763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 02:17:06.848797 | orchestrator | 2026-04-09 02:17:06.848805 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-09 02:17:06.848814 | orchestrator | Thursday 09 April 2026 02:16:44 +0000 (0:00:01.357) 0:00:02.468 ******** 2026-04-09 02:17:06.848822 | orchestrator | ok: [testbed-manager] 2026-04-09 02:17:06.848829 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:17:06.848837 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:17:06.848845 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:17:06.848853 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:17:06.848861 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:17:06.848870 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:17:06.848878 | orchestrator | 2026-04-09 02:17:06.848886 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-09 02:17:06.848906 | orchestrator | Thursday 09 April 2026 02:16:46 +0000 (0:00:01.942) 0:00:04.411 ******** 2026-04-09 02:17:06.848915 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:17:06.848923 | orchestrator | changed: [testbed-manager] 2026-04-09 02:17:06.848931 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:17:06.848939 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:17:06.848947 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:17:06.848954 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:17:06.848962 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:17:06.848970 | orchestrator | 2026-04-09 02:17:06.848978 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-04-09 02:17:06.848986 | orchestrator | Thursday 09 April 2026 02:16:48 +0000 (0:00:01.248) 0:00:05.660 ******** 2026-04-09 02:17:06.848993 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:17:06.849001 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:17:06.849009 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:17:06.849017 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:17:06.849056 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:17:06.849069 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:17:06.849079 | orchestrator | ok: [testbed-manager] 2026-04-09 02:17:06.849089 | orchestrator | 2026-04-09 02:17:06.849099 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-09 02:17:06.849109 | orchestrator | Thursday 09 April 2026 02:16:49 +0000 (0:00:01.278) 0:00:06.938 ******** 2026-04-09 02:17:06.849119 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:17:06.849129 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:17:06.849138 | orchestrator | changed: [testbed-manager] 2026-04-09 02:17:06.849148 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:17:06.849157 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:17:06.849167 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:17:06.849176 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:17:06.849186 | orchestrator | 2026-04-09 02:17:06.849195 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-09 02:17:06.849204 | orchestrator | Thursday 09 April 2026 02:16:50 +0000 (0:00:00.950) 0:00:07.888 ******** 2026-04-09 02:17:06.849214 | orchestrator | changed: [testbed-manager] 2026-04-09 02:17:06.849223 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:17:06.849233 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:17:06.849242 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:17:06.849252 | orchestrator | changed: 
[testbed-node-3] 2026-04-09 02:17:06.849261 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:17:06.849270 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:17:06.849280 | orchestrator | 2026-04-09 02:17:06.849290 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-09 02:17:06.849299 | orchestrator | Thursday 09 April 2026 02:17:02 +0000 (0:00:12.551) 0:00:20.440 ******** 2026-04-09 02:17:06.849309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 02:17:06.849319 | orchestrator | 2026-04-09 02:17:06.849337 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-09 02:17:06.849345 | orchestrator | Thursday 09 April 2026 02:17:04 +0000 (0:00:01.582) 0:00:22.023 ******** 2026-04-09 02:17:06.849353 | orchestrator | changed: [testbed-manager] 2026-04-09 02:17:06.849361 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:17:06.849369 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:17:06.849377 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:17:06.849385 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:17:06.849393 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:17:06.849401 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:17:06.849409 | orchestrator | 2026-04-09 02:17:06.849417 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:17:06.849425 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:17:06.849451 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:17:06.849460 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:17:06.849468 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:17:06.849476 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:17:06.849484 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:17:06.849492 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:17:06.849500 | orchestrator | 2026-04-09 02:17:06.849508 | orchestrator | 2026-04-09 02:17:06.849516 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:17:06.849525 | orchestrator | Thursday 09 April 2026 02:17:06 +0000 (0:00:02.006) 0:00:24.030 ******** 2026-04-09 02:17:06.849532 | orchestrator | =============================================================================== 2026-04-09 02:17:06.849540 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.55s 2026-04-09 02:17:06.849548 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.01s 2026-04-09 02:17:06.849556 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s 2026-04-09 02:17:06.849569 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.58s 2026-04-09 02:17:06.849577 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.36s 2026-04-09 02:17:06.849585 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.28s 2026-04-09 02:17:06.849593 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.25s 2026-04-09 02:17:06.849600 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.95s 2026-04-09 02:17:06.849608 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.83s 2026-04-09 02:17:07.252255 | orchestrator | ++ semver 9.5.0 7.1.1 2026-04-09 02:17:07.300266 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-09 02:17:07.300396 | orchestrator | + sudo systemctl restart manager.service 2026-04-09 02:17:25.674917 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 02:17:25.675072 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-09 02:17:25.675093 | orchestrator | + local max_attempts=60 2026-04-09 02:17:25.675108 | orchestrator | + local name=ceph-ansible 2026-04-09 02:17:25.675119 | orchestrator | + local attempt_num=1 2026-04-09 02:17:25.675131 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:17:25.709240 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:17:25.709333 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:17:25.709346 | orchestrator | + sleep 5 2026-04-09 02:17:30.716101 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:17:30.760689 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:17:30.760779 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:17:30.760791 | orchestrator | + sleep 5 2026-04-09 02:17:35.763634 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:17:35.812228 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:17:35.812316 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:17:35.812331 | orchestrator | + sleep 5 2026-04-09 02:17:40.815886 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:17:40.856529 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:17:40.856612 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-04-09 02:17:40.856623 | orchestrator | + sleep 5 2026-04-09 02:17:45.860418 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:17:45.908552 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:17:45.908843 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:17:45.908871 | orchestrator | + sleep 5 2026-04-09 02:17:50.913500 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:17:50.943129 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:17:50.943189 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:17:50.943197 | orchestrator | + sleep 5 2026-04-09 02:17:55.948564 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:17:55.991717 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:17:55.991790 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:17:55.991797 | orchestrator | + sleep 5 2026-04-09 02:18:00.996142 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:18:01.052546 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 02:18:01.052617 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:18:01.052624 | orchestrator | + sleep 5 2026-04-09 02:18:06.055728 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:18:06.095996 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 02:18:06.096120 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:18:06.096134 | orchestrator | + sleep 5 2026-04-09 02:18:11.098877 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:18:11.129926 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 02:18:11.130003 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-04-09 02:18:11.130010 | orchestrator | + sleep 5 2026-04-09 02:18:16.133998 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:18:16.173994 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 02:18:16.174181 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:18:16.174199 | orchestrator | + sleep 5 2026-04-09 02:18:21.178947 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:18:21.208470 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 02:18:21.208544 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:18:21.208551 | orchestrator | + sleep 5 2026-04-09 02:18:26.212893 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:18:26.262929 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 02:18:26.263025 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 02:18:26.263060 | orchestrator | + sleep 5 2026-04-09 02:18:31.267772 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 02:18:31.300493 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:18:31.300593 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-09 02:18:31.300605 | orchestrator | + local max_attempts=60 2026-04-09 02:18:31.300615 | orchestrator | + local name=kolla-ansible 2026-04-09 02:18:31.300623 | orchestrator | + local attempt_num=1 2026-04-09 02:18:31.300927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-09 02:18:31.332461 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:18:31.332536 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-09 02:18:31.332546 | orchestrator | + local max_attempts=60 2026-04-09 02:18:31.332577 | orchestrator | + local name=osism-ansible 2026-04-09 02:18:31.332585 | 
orchestrator | + local attempt_num=1 2026-04-09 02:18:31.333873 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-09 02:18:31.373710 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 02:18:31.373834 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-09 02:18:31.373864 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-09 02:18:31.569023 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-09 02:18:31.743230 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-09 02:18:31.912478 | orchestrator | ARA in osism-ansible already disabled. 2026-04-09 02:18:32.068086 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-09 02:18:32.068547 | orchestrator | + osism apply gather-facts 2026-04-09 02:18:44.622202 | orchestrator | 2026-04-09 02:18:44 | INFO  | Task 5238b864-4064-424e-a794-1d25cd106687 (gather-facts) was prepared for execution. 2026-04-09 02:18:44.622312 | orchestrator | 2026-04-09 02:18:44 | INFO  | It takes a moment until task 5238b864-4064-424e-a794-1d25cd106687 (gather-facts) has been started and output is visible here. 
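The long `+`/`++` trace above is the expansion of `wait_for_container_healthy`, which polls `docker inspect` until the container reports `healthy`. A reconstruction from the trace (the real script in the testbed repository may differ in details; the status lookup is factored into a small function here for clarity):

```shell
# Status lookup, as seen in the trace (`docker inspect -f '{{.State.Health.Status}}'`).
health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll every 5 seconds until the named container is healthy, giving up
# after max_attempts checks. Matches the attempt_num++/sleep 5 pattern above.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(health_status "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above, `ceph-ansible` cycles through `unhealthy` and `starting` for roughly a minute after the `manager.service` restart before the `healthy` check passes, while `kolla-ansible` and `osism-ansible` are healthy on the first probe.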
2026-04-09 02:18:59.025417 | orchestrator | 2026-04-09 02:18:59.025504 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 02:18:59.025511 | orchestrator | 2026-04-09 02:18:59.025516 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 02:18:59.025521 | orchestrator | Thursday 09 April 2026 02:18:49 +0000 (0:00:00.244) 0:00:00.244 ******** 2026-04-09 02:18:59.025525 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:18:59.025532 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:18:59.025536 | orchestrator | ok: [testbed-manager] 2026-04-09 02:18:59.025540 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:18:59.025544 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:18:59.025548 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:18:59.025552 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:18:59.025556 | orchestrator | 2026-04-09 02:18:59.025560 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 02:18:59.025564 | orchestrator | 2026-04-09 02:18:59.025568 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 02:18:59.025572 | orchestrator | Thursday 09 April 2026 02:18:57 +0000 (0:00:08.506) 0:00:08.751 ******** 2026-04-09 02:18:59.025576 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:18:59.025581 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:18:59.025585 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:18:59.025589 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:18:59.025592 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:18:59.025596 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:18:59.025601 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:18:59.025607 | orchestrator | 2026-04-09 02:18:59.025613 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-09 02:18:59.025617 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:18:59.025622 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:18:59.025626 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:18:59.025630 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:18:59.025633 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:18:59.025637 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:18:59.025641 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 02:18:59.025665 | orchestrator | 2026-04-09 02:18:59.025671 | orchestrator | 2026-04-09 02:18:59.025676 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:18:59.025682 | orchestrator | Thursday 09 April 2026 02:18:58 +0000 (0:00:00.601) 0:00:09.353 ******** 2026-04-09 02:18:59.025702 | orchestrator | =============================================================================== 2026-04-09 02:18:59.025715 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.51s 2026-04-09 02:18:59.025721 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2026-04-09 02:18:59.407794 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-09 02:18:59.426125 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-09 
02:18:59.445504 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-09 02:18:59.461789 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-09 02:18:59.477159 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-09 02:18:59.497433 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-09 02:18:59.516311 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-09 02:18:59.534242 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-09 02:18:59.552823 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-09 02:18:59.567483 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-09 02:18:59.584940 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-09 02:18:59.610356 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-09 02:18:59.632185 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-09 02:18:59.652179 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-09 02:18:59.672009 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-09 02:18:59.688204 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-09 02:18:59.702014 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-09 02:18:59.720206 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-09 02:18:59.738138 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-09 02:18:59.752514 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-09 02:18:59.767096 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-09 02:18:59.787268 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-09 02:18:59.805305 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-09 02:18:59.820480 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-09 02:19:00.071154 | orchestrator | ok: Runtime: 0:25:33.694689 2026-04-09 02:19:00.171384 | 2026-04-09 02:19:00.171532 | TASK [Deploy services] 2026-04-09 02:19:00.867400 | orchestrator | 2026-04-09 02:19:00.867603 | orchestrator | # DEPLOY SERVICES 2026-04-09 02:19:00.867629 | orchestrator | 2026-04-09 02:19:00.867642 | orchestrator | + set -e 2026-04-09 02:19:00.867653 | orchestrator | + echo 2026-04-09 02:19:00.867664 | orchestrator | + echo '# DEPLOY SERVICES' 2026-04-09 02:19:00.867675 | orchestrator | + echo 2026-04-09 02:19:00.867727 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 02:19:00.867747 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 02:19:00.867760 | orchestrator | ++ INTERACTIVE=false 2026-04-09 
02:19:00.867770 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 02:19:00.867787 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 02:19:00.867796 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 02:19:00.867809 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 02:19:00.867818 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 02:19:00.867832 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 02:19:00.867840 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 02:19:00.867852 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 02:19:00.867862 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 02:19:00.867873 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 02:19:00.867882 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 02:19:00.867891 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 02:19:00.867904 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 02:19:00.867912 | orchestrator | ++ export ARA=false 2026-04-09 02:19:00.867921 | orchestrator | ++ ARA=false 2026-04-09 02:19:00.867930 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 02:19:00.867939 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 02:19:00.867947 | orchestrator | ++ export TEMPEST=false 2026-04-09 02:19:00.867956 | orchestrator | ++ TEMPEST=false 2026-04-09 02:19:00.867964 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 02:19:00.867973 | orchestrator | ++ IS_ZUUL=true 2026-04-09 02:19:00.867982 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 02:19:00.867991 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 02:19:00.868000 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 02:19:00.868008 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 02:19:00.868017 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 02:19:00.868025 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 02:19:00.868034 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 
02:19:00.868070 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 02:19:00.868084 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 02:19:00.868100 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 02:19:00.868110 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-04-09 02:19:00.877599 | orchestrator | + set -e 2026-04-09 02:19:00.877701 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 02:19:00.877718 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 02:19:00.877730 | orchestrator | ++ INTERACTIVE=false 2026-04-09 02:19:00.877741 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 02:19:00.877751 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 02:19:00.877762 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 02:19:00.877773 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 02:19:00.877796 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 02:19:00.877807 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 02:19:00.877818 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 02:19:00.877828 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 02:19:00.877839 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 02:19:00.877850 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 02:19:00.877861 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 02:19:00.877872 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 02:19:00.877883 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 02:19:00.877894 | orchestrator | ++ export ARA=false 2026-04-09 02:19:00.877905 | orchestrator | ++ ARA=false 2026-04-09 02:19:00.877915 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 02:19:00.877926 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 02:19:00.877936 | orchestrator | ++ export TEMPEST=false 2026-04-09 02:19:00.877951 | orchestrator | ++ TEMPEST=false 2026-04-09 02:19:00.877962 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 
02:19:00.877973 | orchestrator | ++ IS_ZUUL=true 2026-04-09 02:19:00.877987 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 02:19:00.877999 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 02:19:00.878009 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 02:19:00.878106 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 02:19:00.878118 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 02:19:00.878129 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 02:19:00.878140 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 02:19:00.878151 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 02:19:00.878191 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 02:19:00.878203 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 02:19:00.878214 | orchestrator | 2026-04-09 02:19:00.878227 | orchestrator | # PULL IMAGES 2026-04-09 02:19:00.878245 | orchestrator | 2026-04-09 02:19:00.878267 | orchestrator | + echo 2026-04-09 02:19:00.878294 | orchestrator | + echo '# PULL IMAGES' 2026-04-09 02:19:00.878311 | orchestrator | + echo 2026-04-09 02:19:00.880289 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-09 02:19:00.948153 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-09 02:19:00.948270 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-09 02:19:03.020201 | orchestrator | 2026-04-09 02:19:03 | INFO  | Trying to run play pull-images in environment custom 2026-04-09 02:19:13.103209 | orchestrator | 2026-04-09 02:19:13 | INFO  | Task 0bf4d512-aaf7-4b38-8012-2ef5b69c6624 (pull-images) was prepared for execution. 2026-04-09 02:19:13.103368 | orchestrator | 2026-04-09 02:19:13 | INFO  | Task 0bf4d512-aaf7-4b38-8012-2ef5b69c6624 is running in background. No more output. Check ARA for logs. 
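The `++ semver 9.5.0 7.0.0` / `+ [[ 1 -ge 0 ]]` pair above is a version gate: judging by the trace, the `semver` helper appears to print `1`, `0`, or `-1` for greater, equal, or less, and the script runs the guarded step only when the manager version is at least the threshold. A stand-in comparator with that assumed contract, built on `sort -V` (this is not the actual helper used by the scripts):

```shell
# semver_cmp A B: print 1 if A > B, 0 if equal, -1 if A < B.
# Uses GNU coreutils' version sort; assumes plain dotted versions.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1   # $1 sorts first, so it is the smaller version
    else
        echo 1
    fi
}
```

With this contract, `[[ $(semver_cmp 9.5.0 7.0.0) -ge 0 ]]` succeeds, matching the `[[ 1 -ge 0 ]]` branch taken in the log before `pull-images` is dispatched.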
2026-04-09 02:19:13.491620 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-04-09 02:19:25.693560 | orchestrator | 2026-04-09 02:19:25 | INFO  | Task e9cc8c26-4e51-454a-be70-f7c9d3bdc4c2 (cgit) was prepared for execution.
2026-04-09 02:19:25.693715 | orchestrator | 2026-04-09 02:19:25 | INFO  | Task e9cc8c26-4e51-454a-be70-f7c9d3bdc4c2 is running in background. No more output. Check ARA for logs.
2026-04-09 02:19:38.743152 | orchestrator | 2026-04-09 02:19:38 | INFO  | Task 05cc66b1-1bd0-4687-a598-fae2615da85d (dotfiles) was prepared for execution.
2026-04-09 02:19:38.743268 | orchestrator | 2026-04-09 02:19:38 | INFO  | Task 05cc66b1-1bd0-4687-a598-fae2615da85d is running in background. No more output. Check ARA for logs.
2026-04-09 02:19:51.502191 | orchestrator | 2026-04-09 02:19:51 | INFO  | Task bc553d8d-4fad-4bf2-9b7b-0af0318f8be8 (homer) was prepared for execution.
2026-04-09 02:19:51.502311 | orchestrator | 2026-04-09 02:19:51 | INFO  | Task bc553d8d-4fad-4bf2-9b7b-0af0318f8be8 is running in background. No more output. Check ARA for logs.
2026-04-09 02:20:04.285868 | orchestrator | 2026-04-09 02:20:04 | INFO  | Task f5aa4b90-5253-47ee-9f4d-57adac419266 (phpmyadmin) was prepared for execution.
2026-04-09 02:20:04.285961 | orchestrator | 2026-04-09 02:20:04 | INFO  | Task f5aa4b90-5253-47ee-9f4d-57adac419266 is running in background. No more output. Check ARA for logs.
2026-04-09 02:20:17.178733 | orchestrator | 2026-04-09 02:20:17 | INFO  | Task 9ef8c530-e68d-4e86-9470-f6246d7cb593 (sosreport) was prepared for execution.
2026-04-09 02:20:17.178845 | orchestrator | 2026-04-09 02:20:17 | INFO  | Task 9ef8c530-e68d-4e86-9470-f6246d7cb593 is running in background. No more output. Check ARA for logs.
2026-04-09 02:20:17.564778 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-04-09 02:20:17.571958 | orchestrator | + set -e
2026-04-09 02:20:17.572040 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 02:20:17.572054 | orchestrator | ++ export INTERACTIVE=false
2026-04-09 02:20:17.572125 | orchestrator | ++ INTERACTIVE=false
2026-04-09 02:20:17.572139 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 02:20:17.572149 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 02:20:17.572160 | orchestrator | + source /opt/manager-vars.sh
2026-04-09 02:20:17.572170 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-09 02:20:17.572180 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-09 02:20:17.572191 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-09 02:20:17.572201 | orchestrator | ++ CEPH_VERSION=reef
2026-04-09 02:20:17.572211 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-09 02:20:17.572222 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-09 02:20:17.572232 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-09 02:20:17.572243 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-09 02:20:17.572253 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-09 02:20:17.572263 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-09 02:20:17.572273 | orchestrator | ++ export ARA=false
2026-04-09 02:20:17.572283 | orchestrator | ++ ARA=false
2026-04-09 02:20:17.572293 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-09 02:20:17.572332 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-09 02:20:17.572342 | orchestrator | ++ export TEMPEST=false
2026-04-09 02:20:17.572353 | orchestrator | ++ TEMPEST=false
2026-04-09 02:20:17.572363 | orchestrator | ++ export IS_ZUUL=true
2026-04-09 02:20:17.572372 | orchestrator | ++ IS_ZUUL=true
2026-04-09 02:20:17.572397 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2026-04-09 02:20:17.572413 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2026-04-09 02:20:17.572423 | orchestrator | ++ export EXTERNAL_API=false
2026-04-09 02:20:17.572434 | orchestrator | ++ EXTERNAL_API=false
2026-04-09 02:20:17.572443 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-09 02:20:17.572453 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-09 02:20:17.572463 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-09 02:20:17.572474 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-09 02:20:17.572483 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-09 02:20:17.572494 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-09 02:20:17.572746 | orchestrator | ++ semver 9.5.0 8.0.3
2026-04-09 02:20:17.621958 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 02:20:17.622134 | orchestrator | + osism apply frr
2026-04-09 02:20:30.046896 | orchestrator | 2026-04-09 02:20:30 | INFO  | Task 39cae66e-8416-44f5-9dc2-82b54fffa19a (frr) was prepared for execution.
2026-04-09 02:20:30.047006 | orchestrator | 2026-04-09 02:20:30 | INFO  | It takes a moment until task 39cae66e-8416-44f5-9dc2-82b54fffa19a (frr) has been started and output is visible here.
2026-04-09 02:21:11.836445 | orchestrator |
2026-04-09 02:21:11.836580 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-09 02:21:11.836598 | orchestrator |
2026-04-09 02:21:11.836609 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-09 02:21:11.836627 | orchestrator | Thursday 09 April 2026 02:20:38 +0000 (0:00:00.521) 0:00:00.521 ********
2026-04-09 02:21:11.836643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 02:21:11.836661 | orchestrator |
2026-04-09 02:21:11.836678 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-09 02:21:11.836700 | orchestrator | Thursday 09 April 2026 02:20:39 +0000 (0:00:00.890) 0:00:01.412 ********
2026-04-09 02:21:11.836721 | orchestrator | changed: [testbed-manager]
2026-04-09 02:21:11.836739 | orchestrator |
2026-04-09 02:21:11.836755 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-09 02:21:11.836774 | orchestrator | Thursday 09 April 2026 02:20:42 +0000 (0:00:03.700) 0:00:05.113 ********
2026-04-09 02:21:11.836790 | orchestrator | changed: [testbed-manager]
2026-04-09 02:21:11.836805 | orchestrator |
2026-04-09 02:21:11.836822 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-09 02:21:11.836838 | orchestrator | Thursday 09 April 2026 02:20:58 +0000 (0:00:15.658) 0:00:20.772 ********
2026-04-09 02:21:11.836855 | orchestrator | ok: [testbed-manager]
2026-04-09 02:21:11.836872 | orchestrator |
2026-04-09 02:21:11.836889 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-09 02:21:11.836906 | orchestrator | Thursday 09 April 2026 02:20:59 +0000 (0:00:01.318) 0:00:22.090 ********
2026-04-09 02:21:11.836923 | orchestrator | changed: [testbed-manager]
2026-04-09 02:21:11.836940 | orchestrator |
2026-04-09 02:21:11.836957 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-09 02:21:11.836975 | orchestrator | Thursday 09 April 2026 02:21:00 +0000 (0:00:01.032) 0:00:23.123 ********
2026-04-09 02:21:11.836992 | orchestrator | ok: [testbed-manager]
2026-04-09 02:21:11.837004 | orchestrator |
2026-04-09 02:21:11.837015 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-09 02:21:11.837028 | orchestrator | Thursday 09 April 2026 02:21:02 +0000 (0:00:01.436) 0:00:24.560 ********
2026-04-09 02:21:11.837039 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:21:11.837050 | orchestrator |
2026-04-09 02:21:11.837093 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-09 02:21:11.837105 | orchestrator | Thursday 09 April 2026 02:21:02 +0000 (0:00:00.204) 0:00:24.764 ********
2026-04-09 02:21:11.837140 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:21:11.837152 | orchestrator |
2026-04-09 02:21:11.837164 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-09 02:21:11.837175 | orchestrator | Thursday 09 April 2026 02:21:02 +0000 (0:00:00.179) 0:00:24.944 ********
2026-04-09 02:21:11.837186 | orchestrator | changed: [testbed-manager]
2026-04-09 02:21:11.837196 | orchestrator |
2026-04-09 02:21:11.837205 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-09 02:21:11.837215 | orchestrator | Thursday 09 April 2026 02:21:03 +0000 (0:00:01.086) 0:00:26.030 ********
2026-04-09 02:21:11.837224 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-09 02:21:11.837234 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-09 02:21:11.837246 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-09 02:21:11.837256 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-09 02:21:11.837265 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-09 02:21:11.837275 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-09 02:21:11.837284 | orchestrator |
2026-04-09 02:21:11.837294 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-09 02:21:11.837303 | orchestrator | Thursday 09 April 2026 02:21:07 +0000 (0:00:03.683) 0:00:29.714 ********
2026-04-09 02:21:11.837313 | orchestrator | ok: [testbed-manager]
2026-04-09 02:21:11.837322 | orchestrator |
2026-04-09 02:21:11.837332 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-09 02:21:11.837341 | orchestrator | Thursday 09 April 2026 02:21:09 +0000 (0:00:02.016) 0:00:31.730 ********
2026-04-09 02:21:11.837350 | orchestrator | changed: [testbed-manager]
2026-04-09 02:21:11.837360 | orchestrator |
2026-04-09 02:21:11.837369 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:21:11.837379 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:21:11.837389 | orchestrator |
2026-04-09 02:21:11.837399 | orchestrator |
2026-04-09 02:21:11.837416 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:21:11.837425 | orchestrator | Thursday 09 April 2026 02:21:11 +0000 (0:00:01.845) 0:00:33.576 ********
2026-04-09 02:21:11.837434 | orchestrator | ===============================================================================
2026-04-09 02:21:11.837444 | orchestrator | osism.services.frr : Install frr package ------------------------------- 15.66s
2026-04-09 02:21:11.837453 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 3.70s
2026-04-09 02:21:11.837463 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.68s
2026-04-09 02:21:11.837472 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.02s
2026-04-09 02:21:11.837482 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.85s
2026-04-09 02:21:11.837512 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.44s
2026-04-09 02:21:11.837522 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.32s
2026-04-09 02:21:11.837531 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.09s
2026-04-09 02:21:11.837541 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.03s
2026-04-09 02:21:11.837550 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.89s
2026-04-09 02:21:11.837559 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.20s
2026-04-09 02:21:11.837569 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s
2026-04-09 02:21:12.267992 | orchestrator | + osism apply kubernetes
2026-04-09 02:21:15.096679 | orchestrator | 2026-04-09 02:21:15 | INFO  | Task 73240488-127a-4fc1-9c4d-73d8f87b9d80 (kubernetes) was prepared for execution.
2026-04-09 02:21:15.096762 | orchestrator | 2026-04-09 02:21:15 | INFO  | It takes a moment until task 73240488-127a-4fc1-9c4d-73d8f87b9d80 (kubernetes) has been started and output is visible here.
2026-04-09 02:21:42.583060 | orchestrator |
2026-04-09 02:21:42.583143 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-09 02:21:42.583152 | orchestrator |
2026-04-09 02:21:42.583159 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-09 02:21:42.583165 | orchestrator | Thursday 09 April 2026 02:21:20 +0000 (0:00:00.196) 0:00:00.197 ********
2026-04-09 02:21:42.583171 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:21:42.583178 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:21:42.583183 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:21:42.583189 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:21:42.583194 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:21:42.583204 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:21:42.583213 | orchestrator |
2026-04-09 02:21:42.583225 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-09 02:21:42.583239 | orchestrator | Thursday 09 April 2026 02:21:21 +0000 (0:00:00.880) 0:00:01.077 ********
2026-04-09 02:21:42.583249 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.583260 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.583270 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.583279 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.583289 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.583299 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.583309 | orchestrator |
2026-04-09 02:21:42.583319 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-09 02:21:42.583331 | orchestrator | Thursday 09 April 2026 02:21:22 +0000 (0:00:00.726) 0:00:01.803 ********
2026-04-09 02:21:42.583342 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.583352 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.583362 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.583372 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.583378 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.583383 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.583389 | orchestrator |
2026-04-09 02:21:42.583394 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-09 02:21:42.583400 | orchestrator | Thursday 09 April 2026 02:21:23 +0000 (0:00:00.867) 0:00:02.671 ********
2026-04-09 02:21:42.583406 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:21:42.583411 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:21:42.583417 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:21:42.583425 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:21:42.583430 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:21:42.583436 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:21:42.583441 | orchestrator |
2026-04-09 02:21:42.583446 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-09 02:21:42.583452 | orchestrator | Thursday 09 April 2026 02:21:25 +0000 (0:00:01.573) 0:00:04.245 ********
2026-04-09 02:21:42.583457 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:21:42.583463 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:21:42.583468 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:21:42.583474 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:21:42.583479 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:21:42.583487 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:21:42.583495 | orchestrator |
2026-04-09 02:21:42.583504 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-09 02:21:42.583510 | orchestrator | Thursday 09 April 2026 02:21:26 +0000 (0:00:01.933) 0:00:06.178 ********
2026-04-09 02:21:42.583515 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:21:42.583538 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:21:42.583544 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:21:42.583549 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:21:42.583555 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:21:42.583560 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:21:42.583567 | orchestrator |
2026-04-09 02:21:42.583584 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-09 02:21:42.583599 | orchestrator | Thursday 09 April 2026 02:21:27 +0000 (0:00:00.975) 0:00:07.154 ********
2026-04-09 02:21:42.583609 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.583629 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.583638 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.583649 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.583658 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.583668 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.583675 | orchestrator |
2026-04-09 02:21:42.583682 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-09 02:21:42.583688 | orchestrator | Thursday 09 April 2026 02:21:28 +0000 (0:00:00.700) 0:00:07.855 ********
2026-04-09 02:21:42.583693 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.583698 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.583704 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.583709 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.583714 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.583720 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.583725 | orchestrator |
2026-04-09 02:21:42.583730 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-09 02:21:42.583736 | orchestrator | Thursday 09 April 2026 02:21:29 +0000 (0:00:00.956) 0:00:08.812 ********
2026-04-09 02:21:42.583741 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 02:21:42.583747 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 02:21:42.583752 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.583761 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 02:21:42.583774 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 02:21:42.583785 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.583794 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 02:21:42.583802 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 02:21:42.583810 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.583818 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 02:21:42.583844 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 02:21:42.583855 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.583865 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 02:21:42.583874 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 02:21:42.583883 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.583892 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 02:21:42.583903 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 02:21:42.583908 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.583913 | orchestrator |
2026-04-09 02:21:42.583919 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-09 02:21:42.583924 | orchestrator | Thursday 09 April 2026 02:21:30 +0000 (0:00:00.654) 0:00:09.467 ********
2026-04-09 02:21:42.583933 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.583941 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.583954 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.583974 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.584005 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.584014 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.584023 | orchestrator |
2026-04-09 02:21:42.584030 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-09 02:21:42.584040 | orchestrator | Thursday 09 April 2026 02:21:31 +0000 (0:00:01.303) 0:00:10.770 ********
2026-04-09 02:21:42.584048 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:21:42.584058 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:21:42.584066 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:21:42.584074 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:21:42.584082 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:21:42.584091 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:21:42.584112 | orchestrator |
2026-04-09 02:21:42.584120 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-09 02:21:42.584128 | orchestrator | Thursday 09 April 2026 02:21:32 +0000 (0:00:01.058) 0:00:11.828 ********
2026-04-09 02:21:42.584143 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:21:42.584151 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:21:42.584158 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:21:42.584166 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:21:42.584174 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:21:42.584181 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:21:42.584189 | orchestrator |
2026-04-09 02:21:42.584198 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-09 02:21:42.584207 | orchestrator | Thursday 09 April 2026 02:21:38 +0000 (0:00:05.663) 0:00:17.492 ********
2026-04-09 02:21:42.584215 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.584232 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.584242 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.584251 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.584260 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.584270 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.584279 | orchestrator |
2026-04-09 02:21:42.584287 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-09 02:21:42.584296 | orchestrator | Thursday 09 April 2026 02:21:39 +0000 (0:00:01.075) 0:00:18.568 ********
2026-04-09 02:21:42.584304 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.584314 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.584323 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.584332 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.584340 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.584349 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.584358 | orchestrator |
2026-04-09 02:21:42.584367 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-09 02:21:42.584378 | orchestrator | Thursday 09 April 2026 02:21:40 +0000 (0:00:01.499) 0:00:20.067 ********
2026-04-09 02:21:42.584387 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.584397 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.584406 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.584415 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.584423 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.584431 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.584439 | orchestrator |
2026-04-09 02:21:42.584448 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-09 02:21:42.584456 | orchestrator | Thursday 09 April 2026 02:21:41 +0000 (0:00:00.702) 0:00:20.770 ********
2026-04-09 02:21:42.584465 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-09 02:21:42.584479 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-09 02:21:42.584488 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:21:42.584495 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-09 02:21:42.584512 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-09 02:21:42.584520 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:21:42.584528 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-09 02:21:42.584537 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-09 02:21:42.584545 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:21:42.584553 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-09 02:21:42.584562 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-09 02:21:42.584571 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:21:42.584579 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-09 02:21:42.584588 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-09 02:21:42.584596 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:21:42.584604 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-09 02:21:42.584613 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-09 02:21:42.584622 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:21:42.584631 | orchestrator |
2026-04-09 02:21:42.584639 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-09 02:21:42.584661 | orchestrator | Thursday 09 April 2026 02:21:42 +0000 (0:00:00.996) 0:00:21.767 ********
2026-04-09 02:23:00.400256 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:23:00.400368 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:23:00.400382 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:23:00.400393 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:23:00.400403 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:23:00.400413 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:23:00.400423 | orchestrator |
2026-04-09 02:23:00.400435 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-09 02:23:00.400446 | orchestrator | Thursday 09 April 2026 02:21:43 +0000 (0:00:00.838) 0:00:22.605 ********
2026-04-09 02:23:00.400456 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:23:00.400466 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:23:00.400476 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:23:00.400485 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:23:00.400495 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:23:00.400504 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:23:00.400514 | orchestrator |
2026-04-09 02:23:00.400524 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-09 02:23:00.400533 | orchestrator |
2026-04-09 02:23:00.400543 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-09 02:23:00.400553 | orchestrator | Thursday 09 April 2026 02:21:44 +0000 (0:00:01.330) 0:00:23.936 ********
2026-04-09 02:23:00.400563 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:23:00.400574 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:23:00.400583 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:23:00.400593 | orchestrator |
2026-04-09 02:23:00.400602 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-09 02:23:00.400612 | orchestrator | Thursday 09 April 2026 02:21:46 +0000 (0:00:01.691) 0:00:25.628 ********
2026-04-09 02:23:00.400622 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:23:00.400631 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:23:00.400641 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:23:00.400651 | orchestrator |
2026-04-09 02:23:00.400660 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-09 02:23:00.400670 | orchestrator | Thursday 09 April 2026 02:21:47 +0000 (0:00:01.502) 0:00:27.130 ********
2026-04-09 02:23:00.400680 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:23:00.400689 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:23:00.400698 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:23:00.400708 | orchestrator |
2026-04-09 02:23:00.400718 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-09 02:23:00.400753 | orchestrator | Thursday 09 April 2026 02:21:48 +0000 (0:00:00.817) 0:00:28.082 ********
2026-04-09 02:23:00.400763 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:23:00.400772 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:23:00.400782 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:23:00.400791 | orchestrator |
2026-04-09 02:23:00.400844 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-09 02:23:00.400854 | orchestrator | Thursday 09 April 2026 02:21:49 +0000 (0:00:00.817) 0:00:28.900 ********
2026-04-09 02:23:00.400864 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:23:00.400874 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:23:00.400883 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:23:00.400893 | orchestrator |
2026-04-09 02:23:00.400902 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-09 02:23:00.400928 | orchestrator | Thursday 09 April 2026 02:21:50 +0000 (0:00:00.414) 0:00:29.314 ********
2026-04-09 02:23:00.400938 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:23:00.400948 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:23:00.400957 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:23:00.400967 | orchestrator |
2026-04-09 02:23:00.400976 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-09 02:23:00.400986 | orchestrator | Thursday 09 April 2026 02:21:51 +0000 (0:00:01.099) 0:00:30.414 ********
2026-04-09 02:23:00.400995 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:23:00.401005 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:23:00.401015 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:23:00.401024 | orchestrator |
2026-04-09 02:23:00.401034 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-09 02:23:00.401043 | orchestrator | Thursday 09 April 2026 02:21:52 +0000 (0:00:01.576) 0:00:31.991 ********
2026-04-09 02:23:00.401053 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:23:00.401062 | orchestrator |
2026-04-09 02:23:00.401072 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-09 02:23:00.401081 | orchestrator | Thursday 09 April 2026 02:21:53 +0000 (0:00:00.567) 0:00:32.558 ********
2026-04-09 02:23:00.401091 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:23:00.401100 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:23:00.401110 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:23:00.401119 | orchestrator |
2026-04-09 02:23:00.401129 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-09 02:23:00.401138 | orchestrator | Thursday 09 April 2026 02:21:55 +0000 (0:00:02.369) 0:00:34.927 ********
2026-04-09 02:23:00.401148 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:23:00.401157 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:23:00.401167 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:23:00.401176 | orchestrator |
2026-04-09 02:23:00.401186 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-09 02:23:00.401195 | orchestrator | Thursday 09 April 2026 02:21:56 +0000 (0:00:00.886) 0:00:35.814 ********
2026-04-09 02:23:00.401205 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:23:00.401214 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:23:00.401224 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:23:00.401233 | orchestrator |
2026-04-09 02:23:00.401243 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-09 02:23:00.401252 | orchestrator | Thursday 09 April 2026 02:21:57 +0000 (0:00:01.000) 0:00:36.815 ********
2026-04-09 02:23:00.401262 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:23:00.401271 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:23:00.401281 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:23:00.401290 | orchestrator |
2026-04-09 02:23:00.401300 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-09 02:23:00.401327 | orchestrator | Thursday 09 April 2026 02:21:59 +0000 (0:00:01.602) 0:00:38.417 ********
2026-04-09 02:23:00.401337 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:23:00.401354 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:23:00.401364 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:23:00.401373 | orchestrator |
2026-04-09 02:23:00.401383 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-09 02:23:00.401392 | orchestrator | Thursday 09 April 2026 02:21:59 +0000 (0:00:00.650) 0:00:39.068 ********
2026-04-09 02:23:00.401402 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:23:00.401411 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:23:00.401420 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:23:00.401430 | orchestrator |
2026-04-09 02:23:00.401439 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-09 02:23:00.401449 | orchestrator | Thursday 09 April 2026 02:22:00 +0000 (0:00:00.315) 0:00:39.383 ********
2026-04-09 02:23:00.401459 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:23:00.401472 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:23:00.401489 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:23:00.401507 | orchestrator |
2026-04-09 02:23:00.401524 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-09 02:23:00.401534 | orchestrator | Thursday 09 April 2026 02:22:01 +0000 (0:00:01.264) 0:00:40.647 ********
2026-04-09 02:23:00.401543 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:23:00.401553 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:23:00.401562 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:23:00.401571 | orchestrator |
2026-04-09 02:23:00.401581 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-09 02:23:00.401590 | orchestrator | Thursday 09 April 2026
02:22:04 +0000 (0:00:03.108) 0:00:43.756 ******** 2026-04-09 02:23:00.401599 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:23:00.401609 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:23:00.401618 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:23:00.401632 | orchestrator | 2026-04-09 02:23:00.401642 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-09 02:23:00.401652 | orchestrator | Thursday 09 April 2026 02:22:04 +0000 (0:00:00.364) 0:00:44.120 ******** 2026-04-09 02:23:00.401662 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-09 02:23:00.401674 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-09 02:23:00.401684 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-09 02:23:00.401694 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-09 02:23:00.401703 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-09 02:23:00.401713 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-09 02:23:00.401722 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-09 02:23:00.401732 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-04-09 02:23:00.401741 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-09 02:23:00.401751 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-09 02:23:00.401760 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-09 02:23:00.401776 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-09 02:23:00.401786 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-09 02:23:00.401795 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-09 02:23:00.401825 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-04-09 02:23:00.401836 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:23:00.401845 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:23:00.401855 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:23:00.401864 | orchestrator | 2026-04-09 02:23:00.401879 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-09 02:23:00.401889 | orchestrator | Thursday 09 April 2026 02:22:58 +0000 (0:00:53.932) 0:01:38.053 ******** 2026-04-09 02:23:00.401898 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:23:00.401908 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:23:00.401917 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:23:00.401927 | orchestrator | 2026-04-09 02:23:00.401936 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-09 02:23:00.401946 | orchestrator | Thursday 09 April 2026 02:22:59 +0000 (0:00:00.360) 0:01:38.413 ******** 2026-04-09 02:23:00.401961 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:23:42.695371 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:23:42.695508 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:23:42.695530 | orchestrator | 2026-04-09 02:23:42.695550 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-09 02:23:42.695570 | orchestrator | Thursday 09 April 2026 02:23:00 +0000 (0:00:01.168) 0:01:39.582 ******** 2026-04-09 02:23:42.695587 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:23:42.695604 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:23:42.695620 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:23:42.695636 | orchestrator | 2026-04-09 02:23:42.695652 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-09 02:23:42.695667 | orchestrator | Thursday 09 April 2026 02:23:01 +0000 (0:00:01.264) 0:01:40.846 ******** 2026-04-09 02:23:42.695683 
| orchestrator | changed: [testbed-node-1] 2026-04-09 02:23:42.695699 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:23:42.695746 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:23:42.695764 | orchestrator | 2026-04-09 02:23:42.695781 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-09 02:23:42.695799 | orchestrator | Thursday 09 April 2026 02:23:28 +0000 (0:00:26.369) 0:02:07.215 ******** 2026-04-09 02:23:42.695815 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:23:42.695833 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:23:42.695850 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:23:42.695865 | orchestrator | 2026-04-09 02:23:42.695880 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-09 02:23:42.695897 | orchestrator | Thursday 09 April 2026 02:23:28 +0000 (0:00:00.612) 0:02:07.827 ******** 2026-04-09 02:23:42.695915 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:23:42.695932 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:23:42.695949 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:23:42.695968 | orchestrator | 2026-04-09 02:23:42.695985 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-09 02:23:42.696003 | orchestrator | Thursday 09 April 2026 02:23:29 +0000 (0:00:00.641) 0:02:08.468 ******** 2026-04-09 02:23:42.696019 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:23:42.696036 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:23:42.696054 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:23:42.696071 | orchestrator | 2026-04-09 02:23:42.696089 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-09 02:23:42.696142 | orchestrator | Thursday 09 April 2026 02:23:29 +0000 (0:00:00.675) 0:02:09.144 ******** 2026-04-09 02:23:42.696176 | orchestrator | ok: [testbed-node-0] 
2026-04-09 02:23:42.696194 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:23:42.696211 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:23:42.696228 | orchestrator | 2026-04-09 02:23:42.696246 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-09 02:23:42.696264 | orchestrator | Thursday 09 April 2026 02:23:30 +0000 (0:00:00.867) 0:02:10.011 ******** 2026-04-09 02:23:42.696281 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:23:42.696298 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:23:42.696316 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:23:42.696346 | orchestrator | 2026-04-09 02:23:42.696365 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-09 02:23:42.696382 | orchestrator | Thursday 09 April 2026 02:23:31 +0000 (0:00:00.324) 0:02:10.336 ******** 2026-04-09 02:23:42.696400 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:23:42.696417 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:23:42.696435 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:23:42.696452 | orchestrator | 2026-04-09 02:23:42.696470 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-09 02:23:42.696487 | orchestrator | Thursday 09 April 2026 02:23:31 +0000 (0:00:00.662) 0:02:10.998 ******** 2026-04-09 02:23:42.696505 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:23:42.696522 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:23:42.696540 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:23:42.696558 | orchestrator | 2026-04-09 02:23:42.696576 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-09 02:23:42.696593 | orchestrator | Thursday 09 April 2026 02:23:32 +0000 (0:00:00.654) 0:02:11.653 ******** 2026-04-09 02:23:42.696611 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:23:42.696629 | 
orchestrator | changed: [testbed-node-1] 2026-04-09 02:23:42.696646 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:23:42.696663 | orchestrator | 2026-04-09 02:23:42.696682 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-09 02:23:42.696700 | orchestrator | Thursday 09 April 2026 02:23:33 +0000 (0:00:00.937) 0:02:12.591 ******** 2026-04-09 02:23:42.696748 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:23:42.696768 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:23:42.696787 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:23:42.696806 | orchestrator | 2026-04-09 02:23:42.696824 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-09 02:23:42.696844 | orchestrator | Thursday 09 April 2026 02:23:34 +0000 (0:00:01.044) 0:02:13.636 ******** 2026-04-09 02:23:42.696863 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:23:42.696882 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:23:42.696902 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:23:42.696921 | orchestrator | 2026-04-09 02:23:42.696940 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-09 02:23:42.696959 | orchestrator | Thursday 09 April 2026 02:23:34 +0000 (0:00:00.327) 0:02:13.964 ******** 2026-04-09 02:23:42.696979 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:23:42.696997 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:23:42.697016 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:23:42.697035 | orchestrator | 2026-04-09 02:23:42.697055 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-09 02:23:42.697074 | orchestrator | Thursday 09 April 2026 02:23:35 +0000 (0:00:00.314) 0:02:14.278 ******** 2026-04-09 02:23:42.697094 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:23:42.697113 | orchestrator | 
ok: [testbed-node-1] 2026-04-09 02:23:42.697132 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:23:42.697151 | orchestrator | 2026-04-09 02:23:42.697171 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-09 02:23:42.697190 | orchestrator | Thursday 09 April 2026 02:23:35 +0000 (0:00:00.629) 0:02:14.908 ******** 2026-04-09 02:23:42.697225 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:23:42.697245 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:23:42.697291 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:23:42.697311 | orchestrator | 2026-04-09 02:23:42.697332 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-09 02:23:42.697353 | orchestrator | Thursday 09 April 2026 02:23:36 +0000 (0:00:00.878) 0:02:15.786 ******** 2026-04-09 02:23:42.697372 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-09 02:23:42.697392 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-09 02:23:42.697412 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-09 02:23:42.697431 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-09 02:23:42.697451 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-09 02:23:42.697470 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-09 02:23:42.697489 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-09 02:23:42.697509 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-09 
02:23:42.697525 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-09 02:23:42.697542 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-09 02:23:42.697560 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-09 02:23:42.697577 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-09 02:23:42.697595 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-09 02:23:42.697612 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-09 02:23:42.697629 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-09 02:23:42.697645 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-09 02:23:42.697662 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-09 02:23:42.697679 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-09 02:23:42.697696 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-09 02:23:42.697736 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-09 02:23:42.697754 | orchestrator | 2026-04-09 02:23:42.697769 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-09 02:23:42.697785 | orchestrator | 2026-04-09 02:23:42.697801 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-09 02:23:42.697818 | orchestrator | Thursday 09 April 2026 02:23:39 +0000 (0:00:02.995) 
0:02:18.782 ******** 2026-04-09 02:23:42.697831 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:23:42.697844 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:23:42.697856 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:23:42.697869 | orchestrator | 2026-04-09 02:23:42.697902 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-09 02:23:42.698004 | orchestrator | Thursday 09 April 2026 02:23:39 +0000 (0:00:00.344) 0:02:19.127 ******** 2026-04-09 02:23:42.698087 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:23:42.698104 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:23:42.698117 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:23:42.698144 | orchestrator | 2026-04-09 02:23:42.698158 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-09 02:23:42.698171 | orchestrator | Thursday 09 April 2026 02:23:40 +0000 (0:00:00.879) 0:02:20.006 ******** 2026-04-09 02:23:42.698184 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:23:42.698198 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:23:42.698211 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:23:42.698224 | orchestrator | 2026-04-09 02:23:42.698238 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-09 02:23:42.698251 | orchestrator | Thursday 09 April 2026 02:23:41 +0000 (0:00:00.360) 0:02:20.366 ******** 2026-04-09 02:23:42.698265 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 02:23:42.698280 | orchestrator | 2026-04-09 02:23:42.698294 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-09 02:23:42.698308 | orchestrator | Thursday 09 April 2026 02:23:41 +0000 (0:00:00.498) 0:02:20.865 ******** 2026-04-09 02:23:42.698322 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:23:42.698337 
| orchestrator | skipping: [testbed-node-4] 2026-04-09 02:23:42.698351 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:23:42.698364 | orchestrator | 2026-04-09 02:23:42.698377 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-09 02:23:42.698391 | orchestrator | Thursday 09 April 2026 02:23:42 +0000 (0:00:00.503) 0:02:21.368 ******** 2026-04-09 02:23:42.698404 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:23:42.698417 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:23:42.698432 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:23:42.698445 | orchestrator | 2026-04-09 02:23:42.698458 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-09 02:23:42.698473 | orchestrator | Thursday 09 April 2026 02:23:42 +0000 (0:00:00.312) 0:02:21.681 ******** 2026-04-09 02:23:42.698506 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:25:23.911460 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:25:23.911540 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:25:23.911546 | orchestrator | 2026-04-09 02:25:23.911552 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-09 02:25:23.911557 | orchestrator | Thursday 09 April 2026 02:23:42 +0000 (0:00:00.335) 0:02:22.017 ******** 2026-04-09 02:25:23.911561 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:25:23.911565 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:25:23.911569 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:25:23.911573 | orchestrator | 2026-04-09 02:25:23.911577 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-09 02:25:23.911581 | orchestrator | Thursday 09 April 2026 02:23:43 +0000 (0:00:00.621) 0:02:22.639 ******** 2026-04-09 02:25:23.911585 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:25:23.911589 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 02:25:23.911649 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:25:23.911659 | orchestrator | 2026-04-09 02:25:23.911663 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-09 02:25:23.911667 | orchestrator | Thursday 09 April 2026 02:23:44 +0000 (0:00:01.339) 0:02:23.978 ******** 2026-04-09 02:25:23.911671 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:25:23.911674 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:25:23.911678 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:25:23.911682 | orchestrator | 2026-04-09 02:25:23.911686 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-09 02:25:23.911690 | orchestrator | Thursday 09 April 2026 02:23:46 +0000 (0:00:01.250) 0:02:25.229 ******** 2026-04-09 02:25:23.911694 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:25:23.911698 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:25:23.911701 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:25:23.911705 | orchestrator | 2026-04-09 02:25:23.911709 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-09 02:25:23.911729 | orchestrator | 2026-04-09 02:25:23.911736 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-09 02:25:23.911742 | orchestrator | Thursday 09 April 2026 02:23:56 +0000 (0:00:10.038) 0:02:35.268 ******** 2026-04-09 02:25:23.911751 | orchestrator | ok: [testbed-manager] 2026-04-09 02:25:23.911758 | orchestrator | 2026-04-09 02:25:23.911766 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-09 02:25:23.911771 | orchestrator | Thursday 09 April 2026 02:23:56 +0000 (0:00:00.837) 0:02:36.105 ******** 2026-04-09 02:25:23.911777 | orchestrator | changed: [testbed-manager] 2026-04-09 
02:25:23.911782 | orchestrator | 2026-04-09 02:25:23.911788 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-09 02:25:23.911793 | orchestrator | Thursday 09 April 2026 02:23:57 +0000 (0:00:00.689) 0:02:36.794 ******** 2026-04-09 02:25:23.911800 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-09 02:25:23.911806 | orchestrator | 2026-04-09 02:25:23.911812 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-09 02:25:23.911820 | orchestrator | Thursday 09 April 2026 02:23:58 +0000 (0:00:00.597) 0:02:37.392 ******** 2026-04-09 02:25:23.911826 | orchestrator | changed: [testbed-manager] 2026-04-09 02:25:23.911833 | orchestrator | 2026-04-09 02:25:23.911838 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-09 02:25:23.911843 | orchestrator | Thursday 09 April 2026 02:23:59 +0000 (0:00:00.973) 0:02:38.365 ******** 2026-04-09 02:25:23.911849 | orchestrator | changed: [testbed-manager] 2026-04-09 02:25:23.911856 | orchestrator | 2026-04-09 02:25:23.911863 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-09 02:25:23.911869 | orchestrator | Thursday 09 April 2026 02:23:59 +0000 (0:00:00.670) 0:02:39.036 ******** 2026-04-09 02:25:23.911874 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-09 02:25:23.911878 | orchestrator | 2026-04-09 02:25:23.911882 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-09 02:25:23.911886 | orchestrator | Thursday 09 April 2026 02:24:01 +0000 (0:00:01.732) 0:02:40.768 ******** 2026-04-09 02:25:23.911890 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-09 02:25:23.911893 | orchestrator | 2026-04-09 02:25:23.911917 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-04-09 02:25:23.911924 | orchestrator | Thursday 09 April 2026 02:24:02 +0000 (0:00:00.886) 0:02:41.654 ******** 2026-04-09 02:25:23.911948 | orchestrator | changed: [testbed-manager] 2026-04-09 02:25:23.911953 | orchestrator | 2026-04-09 02:25:23.911956 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-09 02:25:23.911960 | orchestrator | Thursday 09 April 2026 02:24:02 +0000 (0:00:00.464) 0:02:42.118 ******** 2026-04-09 02:25:23.911964 | orchestrator | changed: [testbed-manager] 2026-04-09 02:25:23.911968 | orchestrator | 2026-04-09 02:25:23.911971 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-09 02:25:23.911975 | orchestrator | 2026-04-09 02:25:23.911979 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-09 02:25:23.911983 | orchestrator | Thursday 09 April 2026 02:24:03 +0000 (0:00:00.466) 0:02:42.585 ******** 2026-04-09 02:25:23.911987 | orchestrator | ok: [testbed-manager] 2026-04-09 02:25:23.911991 | orchestrator | 2026-04-09 02:25:23.911994 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-09 02:25:23.911998 | orchestrator | Thursday 09 April 2026 02:24:03 +0000 (0:00:00.151) 0:02:42.737 ******** 2026-04-09 02:25:23.912002 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 02:25:23.912006 | orchestrator | 2026-04-09 02:25:23.912010 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-04-09 02:25:23.912014 | orchestrator | Thursday 09 April 2026 02:24:03 +0000 (0:00:00.441) 0:02:43.179 ******** 2026-04-09 02:25:23.912017 | orchestrator | ok: [testbed-manager] 2026-04-09 02:25:23.912021 | orchestrator | 2026-04-09 02:25:23.912030 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
***************************
2026-04-09 02:25:23.912034 | orchestrator | Thursday 09 April 2026 02:24:04 +0000 (0:00:00.893) 0:02:44.072 ********
2026-04-09 02:25:23.912038 | orchestrator | ok: [testbed-manager]
2026-04-09 02:25:23.912042 | orchestrator |
2026-04-09 02:25:23.912057 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-09 02:25:23.912061 | orchestrator | Thursday 09 April 2026 02:24:06 +0000 (0:00:01.832) 0:02:45.905 ********
2026-04-09 02:25:23.912064 | orchestrator | changed: [testbed-manager]
2026-04-09 02:25:23.912068 | orchestrator |
2026-04-09 02:25:23.912072 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-09 02:25:23.912075 | orchestrator | Thursday 09 April 2026 02:24:07 +0000 (0:00:00.842) 0:02:46.748 ********
2026-04-09 02:25:23.912079 | orchestrator | ok: [testbed-manager]
2026-04-09 02:25:23.912083 | orchestrator |
2026-04-09 02:25:23.912086 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-09 02:25:23.912090 | orchestrator | Thursday 09 April 2026 02:24:08 +0000 (0:00:00.477) 0:02:47.225 ********
2026-04-09 02:25:23.912094 | orchestrator | changed: [testbed-manager]
2026-04-09 02:25:23.912097 | orchestrator |
2026-04-09 02:25:23.912101 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-09 02:25:23.912105 | orchestrator | Thursday 09 April 2026 02:24:16 +0000 (0:00:08.380) 0:02:55.606 ********
2026-04-09 02:25:23.912109 | orchestrator | changed: [testbed-manager]
2026-04-09 02:25:23.912112 | orchestrator |
2026-04-09 02:25:23.912116 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-09 02:25:23.912120 | orchestrator | Thursday 09 April 2026 02:24:29 +0000 (0:00:13.300) 0:03:08.906 ********
2026-04-09 02:25:23.912124 | orchestrator | ok: [testbed-manager]
2026-04-09 02:25:23.912127 | orchestrator |
2026-04-09 02:25:23.912131 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-09 02:25:23.912135 | orchestrator |
2026-04-09 02:25:23.912139 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-09 02:25:23.912142 | orchestrator | Thursday 09 April 2026 02:24:30 +0000 (0:00:00.827) 0:03:09.734 ********
2026-04-09 02:25:23.912146 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:25:23.912150 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:25:23.912154 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:25:23.912157 | orchestrator |
2026-04-09 02:25:23.912161 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-09 02:25:23.912165 | orchestrator | Thursday 09 April 2026 02:24:30 +0000 (0:00:00.342) 0:03:10.077 ********
2026-04-09 02:25:23.912169 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:23.912172 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:25:23.912176 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:25:23.912180 | orchestrator |
2026-04-09 02:25:23.912184 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-09 02:25:23.912188 | orchestrator | Thursday 09 April 2026 02:24:31 +0000 (0:00:00.336) 0:03:10.414 ********
2026-04-09 02:25:23.912191 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:25:23.912195 | orchestrator |
2026-04-09 02:25:23.912199 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-09 02:25:23.912203 | orchestrator | Thursday 09 April 2026 02:24:31 +0000 (0:00:00.737) 0:03:11.152 ********
2026-04-09 02:25:23.912207 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 02:25:23.912211 | orchestrator |
2026-04-09 02:25:23.912214 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-09 02:25:23.912218 | orchestrator | Thursday 09 April 2026 02:24:32 +0000 (0:00:00.877) 0:03:12.029 ********
2026-04-09 02:25:23.912222 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 02:25:23.912225 | orchestrator |
2026-04-09 02:25:23.912229 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-09 02:25:23.912237 | orchestrator | Thursday 09 April 2026 02:24:33 +0000 (0:00:00.914) 0:03:12.943 ********
2026-04-09 02:25:23.912240 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:23.912244 | orchestrator |
2026-04-09 02:25:23.912248 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-09 02:25:23.912251 | orchestrator | Thursday 09 April 2026 02:24:33 +0000 (0:00:00.179) 0:03:13.122 ********
2026-04-09 02:25:23.912255 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 02:25:23.912259 | orchestrator |
2026-04-09 02:25:23.912263 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-09 02:25:23.912266 | orchestrator | Thursday 09 April 2026 02:24:34 +0000 (0:00:01.051) 0:03:14.174 ********
2026-04-09 02:25:23.912270 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:23.912274 | orchestrator |
2026-04-09 02:25:23.912277 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-09 02:25:23.912281 | orchestrator | Thursday 09 April 2026 02:24:35 +0000 (0:00:00.158) 0:03:14.333 ********
2026-04-09 02:25:23.912285 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:23.912288 | orchestrator |
2026-04-09 02:25:23.912292 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-09 02:25:23.912296 | orchestrator | Thursday 09 April 2026 02:24:35 +0000 (0:00:00.135) 0:03:14.468 ********
2026-04-09 02:25:23.912299 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:23.912303 | orchestrator |
2026-04-09 02:25:23.912307 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-09 02:25:23.912314 | orchestrator | Thursday 09 April 2026 02:24:35 +0000 (0:00:00.141) 0:03:14.609 ********
2026-04-09 02:25:23.912318 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:23.912322 | orchestrator |
2026-04-09 02:25:23.912325 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-09 02:25:23.912329 | orchestrator | Thursday 09 April 2026 02:24:35 +0000 (0:00:00.112) 0:03:14.722 ********
2026-04-09 02:25:23.912333 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 02:25:23.912336 | orchestrator |
2026-04-09 02:25:23.912340 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-09 02:25:23.912344 | orchestrator | Thursday 09 April 2026 02:24:41 +0000 (0:00:05.993) 0:03:20.716 ********
2026-04-09 02:25:23.912347 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-09 02:25:23.912351 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-09 02:25:23.912359 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-09 02:25:48.869086 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-09 02:25:48.869196 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-09 02:25:48.869211 | orchestrator |
2026-04-09 02:25:48.869222 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-09 02:25:48.869233 | orchestrator | Thursday 09 April 2026 02:25:23 +0000 (0:00:42.384) 0:04:03.101 ********
2026-04-09 02:25:48.869243 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 02:25:48.869253 | orchestrator |
2026-04-09 02:25:48.869263 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-09 02:25:48.869273 | orchestrator | Thursday 09 April 2026 02:25:25 +0000 (0:00:01.344) 0:04:04.445 ********
2026-04-09 02:25:48.869283 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 02:25:48.869293 | orchestrator |
2026-04-09 02:25:48.869303 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-09 02:25:48.869312 | orchestrator | Thursday 09 April 2026 02:25:26 +0000 (0:00:01.673) 0:04:06.119 ********
2026-04-09 02:25:48.869322 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 02:25:48.869331 | orchestrator |
2026-04-09 02:25:48.869341 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-09 02:25:48.869351 | orchestrator | Thursday 09 April 2026 02:25:28 +0000 (0:00:01.405) 0:04:07.525 ********
2026-04-09 02:25:48.869385 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:48.869395 | orchestrator |
2026-04-09 02:25:48.869405 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-09 02:25:48.869414 | orchestrator | Thursday 09 April 2026 02:25:28 +0000 (0:00:00.166) 0:04:07.691 ********
2026-04-09 02:25:48.869424 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-09 02:25:48.869434 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-09 02:25:48.869443 | orchestrator |
2026-04-09 02:25:48.869453 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-09 02:25:48.869462 | orchestrator | Thursday 09 April 2026 02:25:30 +0000 (0:00:02.013) 0:04:09.704 ********
2026-04-09 02:25:48.869472 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:48.869481 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:25:48.869491 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:25:48.869500 | orchestrator |
2026-04-09 02:25:48.869510 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-09 02:25:48.869519 | orchestrator | Thursday 09 April 2026 02:25:30 +0000 (0:00:00.331) 0:04:10.036 ********
2026-04-09 02:25:48.869529 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:25:48.869538 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:25:48.869548 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:25:48.869557 | orchestrator |
2026-04-09 02:25:48.869566 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-09 02:25:48.869576 | orchestrator |
2026-04-09 02:25:48.869613 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-09 02:25:48.869623 | orchestrator | Thursday 09 April 2026 02:25:31 +0000 (0:00:00.868) 0:04:10.905 ********
2026-04-09 02:25:48.869633 | orchestrator | ok: [testbed-manager]
2026-04-09 02:25:48.869642 | orchestrator |
2026-04-09 02:25:48.869652 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-09 02:25:48.869662 | orchestrator | Thursday 09 April 2026 02:25:32 +0000 (0:00:00.400) 0:04:11.305 ********
2026-04-09 02:25:48.869671 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 02:25:48.869681 | orchestrator |
2026-04-09 02:25:48.869690 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-09 02:25:48.869700 | orchestrator | Thursday 09 April 2026 02:25:32 +0000 (0:00:00.239) 0:04:11.545 ********
2026-04-09 02:25:48.869709 | orchestrator | changed: [testbed-manager]
2026-04-09 02:25:48.869718 | orchestrator |
2026-04-09 02:25:48.869728 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-09 02:25:48.869737 | orchestrator |
2026-04-09 02:25:48.869747 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-09 02:25:48.869756 | orchestrator | Thursday 09 April 2026 02:25:38 +0000 (0:00:05.873) 0:04:17.419 ********
2026-04-09 02:25:48.869766 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:25:48.869775 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:25:48.869785 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:25:48.869794 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:25:48.869804 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:25:48.869813 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:25:48.869822 | orchestrator |
2026-04-09 02:25:48.869832 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-09 02:25:48.869841 | orchestrator | Thursday 09 April 2026 02:25:38 +0000 (0:00:00.651) 0:04:18.071 ********
2026-04-09 02:25:48.869850 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-09 02:25:48.869860 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-09 02:25:48.869869 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-09 02:25:48.869878 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-09 02:25:48.869896 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-09 02:25:48.869905 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-09 02:25:48.869914 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-09 02:25:48.869924 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-09 02:25:48.869933 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-09 02:25:48.869958 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-09 02:25:48.869968 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-09 02:25:48.869978 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-09 02:25:48.869987 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-09 02:25:48.869997 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-09 02:25:48.870006 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-09 02:25:48.870086 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-09 02:25:48.870099 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-09 02:25:48.870109 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-09 02:25:48.870119 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-09 02:25:48.870128 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-09 02:25:48.870137 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-09 02:25:48.870147 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-09 02:25:48.870156 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-09 02:25:48.870166 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-09 02:25:48.870175 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-09 02:25:48.870184 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-09 02:25:48.870193 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-09 02:25:48.870203 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-09 02:25:48.870212 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-09 02:25:48.870221 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-09 02:25:48.870230 | orchestrator |
2026-04-09 02:25:48.870240 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-09 02:25:48.870249 | orchestrator | Thursday 09 April 2026 02:25:47 +0000 (0:00:08.689) 0:04:26.760 ********
2026-04-09 02:25:48.870259 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:25:48.870268 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:25:48.870277 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:25:48.870287 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:48.870296 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:25:48.870305 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:25:48.870314 | orchestrator |
2026-04-09 02:25:48.870324 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-09 02:25:48.870333 | orchestrator | Thursday 09 April 2026 02:25:48 +0000 (0:00:00.582) 0:04:27.342 ********
2026-04-09 02:25:48.870343 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:25:48.870376 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:25:48.870386 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:25:48.870395 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:25:48.870405 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:25:48.870414 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:25:48.870423 | orchestrator |
2026-04-09 02:25:48.870432 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:25:48.870442 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:25:48.870454 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-09 02:25:48.870464 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-09 02:25:48.870473 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-09 02:25:48.870483 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-09 02:25:48.870492 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-09 02:25:48.870502 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-09 02:25:48.870511 | orchestrator |
2026-04-09 02:25:48.870521 | orchestrator |
2026-04-09 02:25:48.870530 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:25:48.870540 | orchestrator | Thursday 09 April 2026 02:25:48 +0000 (0:00:00.709) 0:04:28.051 ********
2026-04-09 02:25:48.870556 | orchestrator | ===============================================================================
2026-04-09 02:25:49.296743 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.93s
2026-04-09 02:25:49.296834 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.38s
2026-04-09 02:25:49.296845 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.37s
2026-04-09 02:25:49.296852 | orchestrator | kubectl : Install required packages ------------------------------------ 13.30s
2026-04-09 02:25:49.296858 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.04s
2026-04-09 02:25:49.296864 | orchestrator | Manage labels ----------------------------------------------------------- 8.69s
2026-04-09 02:25:49.296870 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.38s
2026-04-09 02:25:49.296876 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.99s
2026-04-09 02:25:49.296882 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.87s
2026-04-09 02:25:49.296888 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.66s
2026-04-09 02:25:49.296894 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.11s
2026-04-09 02:25:49.296901 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.00s
2026-04-09 02:25:49.296909 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.37s
2026-04-09 02:25:49.296915 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.01s
2026-04-09 02:25:49.296922 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.93s
2026-04-09 02:25:49.296928 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.83s
2026-04-09 02:25:49.296934 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.73s
2026-04-09 02:25:49.296965 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.69s
2026-04-09 02:25:49.296973 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.67s
2026-04-09 02:25:49.296979 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.60s
2026-04-09 02:25:49.666337 | orchestrator | + osism apply copy-kubeconfig
2026-04-09 02:26:01.938186 | orchestrator | 2026-04-09 02:26:01 | INFO  | Task 074c0307-6bbb-42eb-aaaa-cafeeba7f67d (copy-kubeconfig) was prepared for execution.
2026-04-09 02:26:01.938305 | orchestrator | 2026-04-09 02:26:01 | INFO  | It takes a moment until task 074c0307-6bbb-42eb-aaaa-cafeeba7f67d (copy-kubeconfig) has been started and output is visible here.
2026-04-09 02:26:09.337388 | orchestrator |
2026-04-09 02:26:09.337473 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-09 02:26:09.337481 | orchestrator |
2026-04-09 02:26:09.337487 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-09 02:26:09.337493 | orchestrator | Thursday 09 April 2026 02:26:06 +0000 (0:00:00.186) 0:00:00.186 ********
2026-04-09 02:26:09.337499 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-09 02:26:09.337504 | orchestrator |
2026-04-09 02:26:09.337509 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-09 02:26:09.337529 | orchestrator | Thursday 09 April 2026 02:26:07 +0000 (0:00:00.742) 0:00:00.929 ********
2026-04-09 02:26:09.337535 | orchestrator | changed: [testbed-manager]
2026-04-09 02:26:09.337541 | orchestrator |
2026-04-09 02:26:09.337546 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-09 02:26:09.337552 | orchestrator | Thursday 09 April 2026 02:26:08 +0000 (0:00:01.310) 0:00:02.239 ********
2026-04-09 02:26:09.337560 | orchestrator | changed: [testbed-manager]
2026-04-09 02:26:09.337565 | orchestrator |
2026-04-09 02:26:09.337604 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:26:09.337610 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:26:09.337616 | orchestrator |
2026-04-09 02:26:09.337621 | orchestrator |
2026-04-09 02:26:09.337627 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:26:09.337632 | orchestrator | Thursday 09 April 2026 02:26:09 +0000 (0:00:00.498) 0:00:02.738 ********
2026-04-09 02:26:09.337637 | orchestrator | ===============================================================================
2026-04-09 02:26:09.337642 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.31s
2026-04-09 02:26:09.337648 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s
2026-04-09 02:26:09.337653 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s
2026-04-09 02:26:09.693600 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-04-09 02:26:22.126379 | orchestrator | 2026-04-09 02:26:22 | INFO  | Task 7f32913c-73e1-4771-82b4-da55da11f928 (openstackclient) was prepared for execution.
2026-04-09 02:26:22.126463 | orchestrator | 2026-04-09 02:26:22 | INFO  | It takes a moment until task 7f32913c-73e1-4771-82b4-da55da11f928 (openstackclient) has been started and output is visible here.
2026-04-09 02:27:12.487115 | orchestrator |
2026-04-09 02:27:12.487249 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-09 02:27:12.487270 | orchestrator |
2026-04-09 02:27:12.487282 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-09 02:27:12.487294 | orchestrator | Thursday 09 April 2026 02:26:26 +0000 (0:00:00.279) 0:00:00.279 ********
2026-04-09 02:27:12.487306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-09 02:27:12.487318 | orchestrator |
2026-04-09 02:27:12.487356 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-09 02:27:12.487368 | orchestrator | Thursday 09 April 2026 02:26:27 +0000 (0:00:00.237) 0:00:00.517 ********
2026-04-09 02:27:12.487379 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-09 02:27:12.487392 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-09 02:27:12.487403 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-09 02:27:12.487417 | orchestrator |
2026-04-09 02:27:12.487436 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-09 02:27:12.487456 | orchestrator | Thursday 09 April 2026 02:26:28 +0000 (0:00:01.338) 0:00:01.855 ********
2026-04-09 02:27:12.487476 | orchestrator | changed: [testbed-manager]
2026-04-09 02:27:12.487495 | orchestrator |
2026-04-09 02:27:12.487513 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-09 02:27:12.487592 | orchestrator | Thursday 09 April 2026 02:26:30 +0000 (0:00:01.559) 0:00:03.415 ********
2026-04-09 02:27:12.487616 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-09 02:27:12.487635 | orchestrator | ok: [testbed-manager]
2026-04-09 02:27:12.487655 | orchestrator |
2026-04-09 02:27:12.487668 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-09 02:27:12.487681 | orchestrator | Thursday 09 April 2026 02:27:06 +0000 (0:00:36.883) 0:00:40.298 ********
2026-04-09 02:27:12.487693 | orchestrator | changed: [testbed-manager]
2026-04-09 02:27:12.487705 | orchestrator |
2026-04-09 02:27:12.487719 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-09 02:27:12.487732 | orchestrator | Thursday 09 April 2026 02:27:07 +0000 (0:00:00.987) 0:00:41.286 ********
2026-04-09 02:27:12.487744 | orchestrator | ok: [testbed-manager]
2026-04-09 02:27:12.487757 | orchestrator |
2026-04-09 02:27:12.487769 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-09 02:27:12.487782 | orchestrator | Thursday 09 April 2026 02:27:08 +0000 (0:00:00.682) 0:00:41.969 ********
2026-04-09 02:27:12.487796 | orchestrator | changed: [testbed-manager]
2026-04-09 02:27:12.487807 | orchestrator |
2026-04-09 02:27:12.487818 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-09 02:27:12.487829 | orchestrator | Thursday 09 April 2026 02:27:10 +0000 (0:00:01.618) 0:00:43.587 ********
2026-04-09 02:27:12.487840 | orchestrator | changed: [testbed-manager]
2026-04-09 02:27:12.487851 | orchestrator |
2026-04-09 02:27:12.487862 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-09 02:27:12.487872 | orchestrator | Thursday 09 April 2026 02:27:10 +0000 (0:00:00.767) 0:00:44.355 ********
2026-04-09 02:27:12.487883 | orchestrator | changed: [testbed-manager]
2026-04-09 02:27:12.487894 | orchestrator |
2026-04-09 02:27:12.487905 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-09 02:27:12.487916 | orchestrator | Thursday 09 April 2026 02:27:11 +0000 (0:00:00.620) 0:00:44.975 ********
2026-04-09 02:27:12.487926 | orchestrator | ok: [testbed-manager]
2026-04-09 02:27:12.487937 | orchestrator |
2026-04-09 02:27:12.487948 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:27:12.487959 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:27:12.487970 | orchestrator |
2026-04-09 02:27:12.487981 | orchestrator |
2026-04-09 02:27:12.487992 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:27:12.488002 | orchestrator | Thursday 09 April 2026 02:27:12 +0000 (0:00:00.444) 0:00:45.419 ********
2026-04-09 02:27:12.488014 | orchestrator | ===============================================================================
2026-04-09 02:27:12.488025 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.88s
2026-04-09 02:27:12.488036 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.62s
2026-04-09 02:27:12.488058 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.56s
2026-04-09 02:27:12.488069 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.34s
2026-04-09 02:27:12.488080 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.99s
2026-04-09 02:27:12.488090 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.77s
2026-04-09 02:27:12.488101 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.68s
2026-04-09 02:27:12.488112 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.62s
2026-04-09 02:27:12.488123 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s
2026-04-09 02:27:12.488133 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.24s
2026-04-09 02:27:15.055860 | orchestrator | 2026-04-09 02:27:15 | INFO  | Task 3edda766-c4fb-4a1a-a004-adfeb0acafc8 (common) was prepared for execution.
2026-04-09 02:27:15.055983 | orchestrator | 2026-04-09 02:27:15 | INFO  | It takes a moment until task 3edda766-c4fb-4a1a-a004-adfeb0acafc8 (common) has been started and output is visible here.
2026-04-09 02:27:28.291795 | orchestrator |
2026-04-09 02:27:28.291877 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-09 02:27:28.291885 | orchestrator |
2026-04-09 02:27:28.291890 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-09 02:27:28.291896 | orchestrator | Thursday 09 April 2026 02:27:19 +0000 (0:00:00.306) 0:00:00.306 ********
2026-04-09 02:27:28.291901 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 02:27:28.291907 | orchestrator |
2026-04-09 02:27:28.291912 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-09 02:27:28.291916 | orchestrator | Thursday 09 April 2026 02:27:20 +0000 (0:00:01.427) 0:00:01.734 ********
2026-04-09 02:27:28.291921 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-09 02:27:28.291926 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-09 02:27:28.291930 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-09 02:27:28.291936 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-09 02:27:28.291941 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-09 02:27:28.291945 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-09 02:27:28.291950 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-09 02:27:28.291955 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-09 02:27:28.291973 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-09 02:27:28.291978 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-09 02:27:28.291982 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-09 02:27:28.291987 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-09 02:27:28.291993 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-09 02:27:28.291998 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-09 02:27:28.292002 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-09 02:27:28.292007 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-09 02:27:28.292012 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-09 02:27:28.292032 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-09 02:27:28.292037 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-09 02:27:28.292041 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-09 02:27:28.292046 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-09 02:27:28.292050 | orchestrator |
2026-04-09 02:27:28.292055 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-09 02:27:28.292059 | orchestrator | Thursday 09 April 2026 02:27:23 +0000 (0:00:02.813) 0:00:04.547 ********
2026-04-09 02:27:28.292064 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 02:27:28.292070 | orchestrator |
2026-04-09 02:27:28.292074 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-09 02:27:28.292081 | orchestrator | Thursday 09 April 2026 02:27:25 +0000 (0:00:01.511) 0:00:06.059 ********
2026-04-09 02:27:28.292088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 02:27:28.292095 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 02:27:28.292113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 02:27:28.292119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 02:27:28.292124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 02:27:28.292129 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 02:27:28.292138 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:28.292143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:28.292147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:28.292161 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.358970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359170 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 
02:27:29.359180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359231 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:29.359258 | orchestrator | 2026-04-09 02:27:29.359269 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-09 02:27:29.359279 | orchestrator | Thursday 09 April 2026 02:27:28 +0000 (0:00:03.627) 0:00:09.686 ******** 2026-04-09 02:27:29.359292 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:29.359302 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:29.359311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:29.359327 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:27:29.359339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:29.359363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.071746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.071864 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:27:30.071920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:30.071932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.071939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.071946 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:27:30.071954 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:30.071966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.071970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.071975 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:27:30.071993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:30.072011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.072018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.072025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:30.072033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.072040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:30.072047 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:27:30.072054 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:27:30.072061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:30.072069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:31.111134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:31.111243 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:27:31.111263 | orchestrator | 2026-04-09 02:27:31.111277 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-09 02:27:31.111290 | orchestrator | Thursday 09 April 2026 02:27:30 +0000 (0:00:01.146) 0:00:10.832 ******** 2026-04-09 02:27:31.111305 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:31.111319 
| orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:31.111331 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:31.111363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:31.111380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:31.111416 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:27:31.111429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:31.111440 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:27:31.111480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:31.111493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:31.111504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:31.111514 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:27:31.111559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:31.111572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-09 02:27:31.111587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:31.111608 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:27:31.111619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:31.111679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:36.458381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:36.458474 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:27:36.458488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:36.458500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:36.458509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:36.458558 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:27:36.458569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 02:27:36.458598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:36.458608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:36.458616 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:27:36.458624 | orchestrator | 2026-04-09 
02:27:36.458633 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-09 02:27:36.458643 | orchestrator | Thursday 09 April 2026 02:27:32 +0000 (0:00:02.024) 0:00:12.857 ******** 2026-04-09 02:27:36.458658 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:27:36.458671 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:27:36.458684 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:27:36.458697 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:27:36.458729 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:27:36.458744 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:27:36.458758 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:27:36.458772 | orchestrator | 2026-04-09 02:27:36.458785 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-09 02:27:36.458798 | orchestrator | Thursday 09 April 2026 02:27:32 +0000 (0:00:00.711) 0:00:13.569 ******** 2026-04-09 02:27:36.458812 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:27:36.458826 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:27:36.458839 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:27:36.458854 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:27:36.458868 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:27:36.458882 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:27:36.458896 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:27:36.458909 | orchestrator | 2026-04-09 02:27:36.458924 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-09 02:27:36.458941 | orchestrator | Thursday 09 April 2026 02:27:33 +0000 (0:00:00.979) 0:00:14.548 ******** 2026-04-09 02:27:36.458958 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:36.458996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:36.459024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:36.459046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:36.459063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:36.459078 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:36.459108 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:39.433124 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433277 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433311 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433319 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433365 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:39.433372 | orchestrator | 2026-04-09 02:27:39.433379 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-09 02:27:39.433387 | orchestrator | Thursday 09 April 2026 02:27:37 +0000 
(0:00:03.614) 0:00:18.162 ******** 2026-04-09 02:27:39.433393 | orchestrator | [WARNING]: Skipped 2026-04-09 02:27:39.433400 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-09 02:27:39.433409 | orchestrator | to this access issue: 2026-04-09 02:27:39.433416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-09 02:27:39.433421 | orchestrator | directory 2026-04-09 02:27:39.433428 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 02:27:39.433435 | orchestrator | 2026-04-09 02:27:39.433441 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-09 02:27:39.433447 | orchestrator | Thursday 09 April 2026 02:27:38 +0000 (0:00:00.999) 0:00:19.162 ******** 2026-04-09 02:27:39.433453 | orchestrator | [WARNING]: Skipped 2026-04-09 02:27:39.433463 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-09 02:27:50.666742 | orchestrator | to this access issue: 2026-04-09 02:27:50.666836 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-09 02:27:50.666849 | orchestrator | directory 2026-04-09 02:27:50.666857 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 02:27:50.666866 | orchestrator | 2026-04-09 02:27:50.666874 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-09 02:27:50.666883 | orchestrator | Thursday 09 April 2026 02:27:39 +0000 (0:00:01.341) 0:00:20.504 ******** 2026-04-09 02:27:50.666911 | orchestrator | [WARNING]: Skipped 2026-04-09 02:27:50.666919 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-09 02:27:50.666926 | orchestrator | to this access issue: 2026-04-09 02:27:50.666934 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 
2026-04-09 02:27:50.666941 | orchestrator | directory 2026-04-09 02:27:50.666948 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 02:27:50.666956 | orchestrator | 2026-04-09 02:27:50.666963 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-09 02:27:50.666970 | orchestrator | Thursday 09 April 2026 02:27:40 +0000 (0:00:00.893) 0:00:21.398 ******** 2026-04-09 02:27:50.666978 | orchestrator | [WARNING]: Skipped 2026-04-09 02:27:50.666985 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-09 02:27:50.666992 | orchestrator | to this access issue: 2026-04-09 02:27:50.666999 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-09 02:27:50.667006 | orchestrator | directory 2026-04-09 02:27:50.667013 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 02:27:50.667020 | orchestrator | 2026-04-09 02:27:50.667028 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-09 02:27:50.667035 | orchestrator | Thursday 09 April 2026 02:27:41 +0000 (0:00:00.972) 0:00:22.370 ******** 2026-04-09 02:27:50.667042 | orchestrator | changed: [testbed-manager] 2026-04-09 02:27:50.667049 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:27:50.667056 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:27:50.667064 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:27:50.667071 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:27:50.667078 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:27:50.667100 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:27:50.667108 | orchestrator | 2026-04-09 02:27:50.667115 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-09 02:27:50.667122 | orchestrator | Thursday 09 April 2026 02:27:44 +0000 (0:00:02.742) 0:00:25.112 ******** 
2026-04-09 02:27:50.667130 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 02:27:50.667138 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 02:27:50.667146 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 02:27:50.667153 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 02:27:50.667160 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 02:27:50.667167 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 02:27:50.667177 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 02:27:50.667185 | orchestrator | 2026-04-09 02:27:50.667192 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-09 02:27:50.667200 | orchestrator | Thursday 09 April 2026 02:27:46 +0000 (0:00:02.348) 0:00:27.461 ******** 2026-04-09 02:27:50.667207 | orchestrator | changed: [testbed-manager] 2026-04-09 02:27:50.667214 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:27:50.667221 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:27:50.667228 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:27:50.667236 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:27:50.667243 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:27:50.667250 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:27:50.667257 | orchestrator | 2026-04-09 02:27:50.667264 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-09 02:27:50.667277 | orchestrator | Thursday 09 
April 2026 02:27:48 +0000 (0:00:02.066) 0:00:29.528 ******** 2026-04-09 02:27:50.667287 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:50.667311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:50.667322 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:50.667330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:50.667339 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:50.667347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:50.667366 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:50.667383 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:50.667392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:50.667407 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:56.648357 | orchestrator | ok: [testbed-node-3] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:56.648472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:56.648490 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:56.648505 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:56.648599 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:56.648636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:56.648648 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:56.648688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:27:56.648701 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:56.648713 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:56.648725 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:56.648736 | orchestrator | 2026-04-09 
02:27:56.648749 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-09 02:27:56.648762 | orchestrator | Thursday 09 April 2026 02:27:50 +0000 (0:00:01.892) 0:00:31.420 ******** 2026-04-09 02:27:56.648772 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 02:27:56.648784 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 02:27:56.648804 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 02:27:56.648815 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 02:27:56.648826 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 02:27:56.648836 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 02:27:56.648847 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 02:27:56.648858 | orchestrator | 2026-04-09 02:27:56.648868 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-09 02:27:56.648879 | orchestrator | Thursday 09 April 2026 02:27:52 +0000 (0:00:02.029) 0:00:33.449 ******** 2026-04-09 02:27:56.648894 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 02:27:56.648907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 02:27:56.648920 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 02:27:56.648968 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 02:27:56.648982 | orchestrator | 
changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 02:27:56.648994 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 02:27:56.649006 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 02:27:56.649019 | orchestrator | 2026-04-09 02:27:56.649031 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-09 02:27:56.649043 | orchestrator | Thursday 09 April 2026 02:27:54 +0000 (0:00:01.775) 0:00:35.225 ******** 2026-04-09 02:27:56.649056 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:56.649079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:57.125292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:57.125401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:57.125442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:57.125468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:57.125481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 02:27:57.125492 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:57.125504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:57.125601 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:57.125624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:57.125652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:57.125670 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:57.125681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:57.125693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:57.125706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:27:57.125728 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:29:26.907549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:29:26.907688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:29:26.907707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:29:26.907730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:29:26.907741 | orchestrator | 2026-04-09 02:29:26.907748 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-09 02:29:26.907755 | orchestrator | Thursday 09 April 2026 02:27:57 +0000 (0:00:02.656) 0:00:37.882 ******** 2026-04-09 02:29:26.907761 | orchestrator | changed: [testbed-manager] 2026-04-09 02:29:26.907768 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:29:26.907774 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:29:26.907780 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:29:26.907786 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:29:26.907791 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:29:26.907797 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:29:26.907803 | orchestrator | 2026-04-09 02:29:26.907809 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-09 02:29:26.907815 | orchestrator | Thursday 09 April 2026 02:27:58 +0000 (0:00:01.417) 0:00:39.299 ******** 2026-04-09 02:29:26.907820 | orchestrator | changed: [testbed-manager] 2026-04-09 02:29:26.907826 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:29:26.907832 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:29:26.907838 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:29:26.907843 | orchestrator | changed: 
[testbed-node-3] 2026-04-09 02:29:26.907849 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:29:26.907855 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:29:26.907863 | orchestrator | 2026-04-09 02:29:26.907872 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 02:29:26.907881 | orchestrator | Thursday 09 April 2026 02:27:59 +0000 (0:00:01.139) 0:00:40.439 ******** 2026-04-09 02:29:26.907887 | orchestrator | 2026-04-09 02:29:26.907893 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 02:29:26.907899 | orchestrator | Thursday 09 April 2026 02:27:59 +0000 (0:00:00.066) 0:00:40.506 ******** 2026-04-09 02:29:26.907904 | orchestrator | 2026-04-09 02:29:26.907910 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 02:29:26.907916 | orchestrator | Thursday 09 April 2026 02:27:59 +0000 (0:00:00.069) 0:00:40.575 ******** 2026-04-09 02:29:26.907921 | orchestrator | 2026-04-09 02:29:26.907928 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 02:29:26.907938 | orchestrator | Thursday 09 April 2026 02:27:59 +0000 (0:00:00.066) 0:00:40.641 ******** 2026-04-09 02:29:26.907948 | orchestrator | 2026-04-09 02:29:26.907957 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 02:29:26.907974 | orchestrator | Thursday 09 April 2026 02:28:00 +0000 (0:00:00.260) 0:00:40.901 ******** 2026-04-09 02:29:26.907983 | orchestrator | 2026-04-09 02:29:26.907992 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 02:29:26.908001 | orchestrator | Thursday 09 April 2026 02:28:00 +0000 (0:00:00.088) 0:00:40.990 ******** 2026-04-09 02:29:26.908010 | orchestrator | 2026-04-09 02:29:26.908019 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-04-09 02:29:26.908028 | orchestrator | Thursday 09 April 2026 02:28:00 +0000 (0:00:00.069) 0:00:41.059 ******** 2026-04-09 02:29:26.908037 | orchestrator | 2026-04-09 02:29:26.908046 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-09 02:29:26.908054 | orchestrator | Thursday 09 April 2026 02:28:00 +0000 (0:00:00.101) 0:00:41.161 ******** 2026-04-09 02:29:26.908064 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:29:26.908074 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:29:26.908084 | orchestrator | changed: [testbed-manager] 2026-04-09 02:29:26.908094 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:29:26.908105 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:29:26.908134 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:29:26.908146 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:29:26.908155 | orchestrator | 2026-04-09 02:29:26.908164 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-09 02:29:26.908174 | orchestrator | Thursday 09 April 2026 02:28:40 +0000 (0:00:40.229) 0:01:21.391 ******** 2026-04-09 02:29:26.908184 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:29:26.908193 | orchestrator | changed: [testbed-manager] 2026-04-09 02:29:26.908202 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:29:26.908211 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:29:26.908219 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:29:26.908229 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:29:26.908238 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:29:26.908248 | orchestrator | 2026-04-09 02:29:26.908258 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-09 02:29:26.908268 | orchestrator | Thursday 09 April 2026 02:29:16 +0000 (0:00:35.611) 0:01:57.003 
******** 2026-04-09 02:29:26.908278 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:29:26.908288 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:29:26.908298 | orchestrator | ok: [testbed-manager] 2026-04-09 02:29:26.908308 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:29:26.908318 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:29:26.908327 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:29:26.908337 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:29:26.908345 | orchestrator | 2026-04-09 02:29:26.908351 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-09 02:29:26.908357 | orchestrator | Thursday 09 April 2026 02:29:18 +0000 (0:00:01.933) 0:01:58.937 ******** 2026-04-09 02:29:26.908363 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:29:26.908369 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:29:26.908374 | orchestrator | changed: [testbed-manager] 2026-04-09 02:29:26.908380 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:29:26.908386 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:29:26.908391 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:29:26.908397 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:29:26.908403 | orchestrator | 2026-04-09 02:29:26.908408 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:29:26.908415 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 02:29:26.908423 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 02:29:26.908444 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 02:29:26.908456 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 02:29:26.908525 | orchestrator | 
testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 02:29:26.908534 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 02:29:26.908540 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 02:29:26.908546 | orchestrator | 2026-04-09 02:29:26.908552 | orchestrator | 2026-04-09 02:29:26.908557 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:29:26.908563 | orchestrator | Thursday 09 April 2026 02:29:26 +0000 (0:00:08.695) 0:02:07.632 ******** 2026-04-09 02:29:26.908569 | orchestrator | =============================================================================== 2026-04-09 02:29:26.908575 | orchestrator | common : Restart fluentd container ------------------------------------- 40.23s 2026-04-09 02:29:26.908581 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.61s 2026-04-09 02:29:26.908586 | orchestrator | common : Restart cron container ----------------------------------------- 8.70s 2026-04-09 02:29:26.908592 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.63s 2026-04-09 02:29:26.908598 | orchestrator | common : Copying over config.json files for services -------------------- 3.61s 2026-04-09 02:29:26.908603 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.81s 2026-04-09 02:29:26.908609 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.74s 2026-04-09 02:29:26.908615 | orchestrator | common : Check common containers ---------------------------------------- 2.66s 2026-04-09 02:29:26.908620 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.35s 2026-04-09 02:29:26.908626 | orchestrator | common : Ensure RabbitMQ 
Erlang cookie exists --------------------------- 2.07s 2026-04-09 02:29:26.908632 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.03s 2026-04-09 02:29:26.908638 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.02s 2026-04-09 02:29:26.908643 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.93s 2026-04-09 02:29:26.908649 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.89s 2026-04-09 02:29:26.908655 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.78s 2026-04-09 02:29:26.908661 | orchestrator | common : include_tasks -------------------------------------------------- 1.51s 2026-04-09 02:29:26.908674 | orchestrator | common : include_tasks -------------------------------------------------- 1.43s 2026-04-09 02:29:27.406594 | orchestrator | common : Creating log volume -------------------------------------------- 1.42s 2026-04-09 02:29:27.406681 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.34s 2026-04-09 02:29:27.406690 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.15s 2026-04-09 02:29:30.041240 | orchestrator | 2026-04-09 02:29:30 | INFO  | Task 7a5d6f8b-db98-4ab1-8738-31eaf39e5c53 (loadbalancer) was prepared for execution. 2026-04-09 02:29:30.041318 | orchestrator | 2026-04-09 02:29:30 | INFO  | It takes a moment until task 7a5d6f8b-db98-4ab1-8738-31eaf39e5c53 (loadbalancer) has been started and output is visible here. 
2026-04-09 02:29:45.926713 | orchestrator | 2026-04-09 02:29:45.926825 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 02:29:45.926850 | orchestrator | 2026-04-09 02:29:45.926872 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 02:29:45.926915 | orchestrator | Thursday 09 April 2026 02:29:34 +0000 (0:00:00.312) 0:00:00.312 ******** 2026-04-09 02:29:45.926930 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:29:45.926947 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:29:45.926962 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:29:45.926978 | orchestrator | 2026-04-09 02:29:45.926992 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 02:29:45.927008 | orchestrator | Thursday 09 April 2026 02:29:35 +0000 (0:00:00.353) 0:00:00.666 ******** 2026-04-09 02:29:45.927024 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-09 02:29:45.927038 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-09 02:29:45.927053 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-09 02:29:45.927065 | orchestrator | 2026-04-09 02:29:45.927080 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-09 02:29:45.927095 | orchestrator | 2026-04-09 02:29:45.927109 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-09 02:29:45.927124 | orchestrator | Thursday 09 April 2026 02:29:35 +0000 (0:00:00.486) 0:00:01.152 ******** 2026-04-09 02:29:45.927155 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:29:45.927171 | orchestrator | 2026-04-09 02:29:45.927180 | orchestrator | TASK [loadbalancer : Check IPv6 support] 
***************************************
2026-04-09 02:29:45.927189 | orchestrator | Thursday 09 April 2026 02:29:36 +0000 (0:00:00.596) 0:00:01.748 ********
2026-04-09 02:29:45.927198 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:29:45.927206 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:29:45.927215 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:29:45.927223 | orchestrator |
2026-04-09 02:29:45.927232 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-09 02:29:45.927243 | orchestrator | Thursday 09 April 2026 02:29:36 +0000 (0:00:00.630) 0:00:02.378 ********
2026-04-09 02:29:45.927253 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:29:45.927263 | orchestrator |
2026-04-09 02:29:45.927272 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-09 02:29:45.927282 | orchestrator | Thursday 09 April 2026 02:29:37 +0000 (0:00:00.810) 0:00:03.189 ********
2026-04-09 02:29:45.927293 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:29:45.927303 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:29:45.927312 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:29:45.927320 | orchestrator |
2026-04-09 02:29:45.927329 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-09 02:29:45.927338 | orchestrator | Thursday 09 April 2026 02:29:38 +0000 (0:00:00.634) 0:00:03.823 ********
2026-04-09 02:29:45.927346 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-09 02:29:45.927355 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-09 02:29:45.927364 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-09 02:29:45.927372 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-09 02:29:45.927381 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-09 02:29:45.927389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-09 02:29:45.927398 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-09 02:29:45.927408 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-09 02:29:45.927416 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-09 02:29:45.927427 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-09 02:29:45.927500 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-09 02:29:45.927520 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-09 02:29:45.927533 | orchestrator |
2026-04-09 02:29:45.927549 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-09 02:29:45.927566 | orchestrator | Thursday 09 April 2026 02:29:41 +0000 (0:00:03.084) 0:00:06.908 ********
2026-04-09 02:29:45.927583 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-09 02:29:45.927601 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-09 02:29:45.927617 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-09 02:29:45.927633 | orchestrator |
2026-04-09 02:29:45.927650 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-09 02:29:45.927666 | orchestrator | Thursday 09 April 2026 02:29:42 +0000 (0:00:00.760) 0:00:07.668 ********
2026-04-09 02:29:45.927682 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-09 02:29:45.927691 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-09 02:29:45.927699 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-09 02:29:45.927708 | orchestrator |
2026-04-09 02:29:45.927716 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-09 02:29:45.927726 | orchestrator | Thursday 09 April 2026 02:29:43 +0000 (0:00:01.240) 0:00:08.908 ********
2026-04-09 02:29:45.927741 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-09 02:29:45.927756 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:29:45.927794 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-09 02:29:45.927810 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:29:45.927824 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-09 02:29:45.927837 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:29:45.927850 | orchestrator |
2026-04-09 02:29:45.927866 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-09 02:29:45.927880 | orchestrator | Thursday 09 April 2026 02:29:44 +0000 (0:00:00.570) 0:00:09.479 ********
2026-04-09 02:29:45.927899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 02:29:45.927931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 02:29:45.927949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 02:29:45.927977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:29:45.927995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:29:45.928022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:29:51.337182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:29:51.337277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:29:51.337288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:29:51.337293 | orchestrator |
2026-04-09 02:29:51.337298 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-09 02:29:51.337304 | orchestrator | Thursday 09 April 2026 02:29:45 +0000 (0:00:01.847) 0:00:11.327 ********
2026-04-09 02:29:51.337325 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:29:51.337331 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:29:51.337334 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:29:51.337339 | orchestrator |
2026-04-09 02:29:51.337343 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-09 02:29:51.337346 | orchestrator | Thursday 09 April 2026 02:29:46 +0000 (0:00:00.906) 0:00:12.233 ********
2026-04-09 02:29:51.337351 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-09 02:29:51.337355 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-09 02:29:51.337359 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-09 02:29:51.337362 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-09 02:29:51.337366 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-09 02:29:51.337370 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-09 02:29:51.337373 | orchestrator |
2026-04-09 02:29:51.337377 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-09 02:29:51.337381 | orchestrator | Thursday 09 April 2026 02:29:48 +0000 (0:00:01.506) 0:00:13.740 ********
2026-04-09 02:29:51.337385 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:29:51.337389 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:29:51.337392 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:29:51.337396 | orchestrator |
2026-04-09 02:29:51.337400 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-09 02:29:51.337404 | orchestrator | Thursday 09 April 2026 02:29:49 +0000 (0:00:00.902) 0:00:14.643 ********
2026-04-09 02:29:51.337407 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:29:51.337411 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:29:51.337415 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:29:51.337419 | orchestrator |
2026-04-09 02:29:51.337424 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-09 02:29:51.337430 | orchestrator | Thursday 09 April 2026 02:29:50 +0000 (0:00:01.375) 0:00:16.018 ********
2026-04-09 02:29:51.337437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 02:29:51.337516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:29:51.337526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:29:51.337532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 02:29:51.337542 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:29:51.337546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 02:29:51.337583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:29:51.337591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:29:51.337599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 02:29:51.337603 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:29:51.337611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 02:29:54.371085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:29:54.371180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:29:54.371193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 02:29:54.371204 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:29:54.371215 | orchestrator |
2026-04-09 02:29:54.371225 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-04-09 02:29:54.371235 | orchestrator | Thursday 09 April 2026 02:29:51 +0000 (0:00:00.718) 0:00:16.737 ********
2026-04-09 02:29:54.371245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 02:29:54.371255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 02:29:54.371264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 02:29:54.371328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:29:54.371342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:29:54.371354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 02:29:54.371366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:29:54.371377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:29:54.371388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 02:29:54.371423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:30:02.936027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:30:02.936149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809', '__omit_place_holder__a7c02665e571ae117d1d519fea32f918e5c6f809'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 02:30:02.936166 | orchestrator |
2026-04-09 02:30:02.936179 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-04-09 02:30:02.936193 | orchestrator | Thursday 09 April 2026 02:29:54 +0000 (0:00:03.034) 0:00:19.771 ********
2026-04-09 02:30:02.936204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 02:30:02.936217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 02:30:02.936229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 02:30:02.936265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:30:02.936311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:30:02.936324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:30:02.936335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:30:02.936347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:30:02.936358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:30:02.936369 | orchestrator |
2026-04-09 02:30:02.936380 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-09 02:30:02.936391 | orchestrator | Thursday 09 April 2026 02:29:57 +0000 (0:00:03.203) 0:00:22.975 ********
2026-04-09 02:30:02.936411 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-09 02:30:02.936423 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-09 02:30:02.936434 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-09 02:30:02.936495 | orchestrator |
2026-04-09 02:30:02.936507 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-09 02:30:02.936518 | orchestrator | Thursday 09 April 2026 02:29:59 +0000 (0:00:01.814) 0:00:24.789 ********
2026-04-09 02:30:02.936529 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-09 02:30:02.936539 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-09 02:30:02.936550 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-09 02:30:02.936561 | orchestrator |
2026-04-09 02:30:02.936572 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-09 02:30:02.936582 | orchestrator | Thursday 09 April 2026 02:30:02 +0000 (0:00:02.987) 0:00:27.777 ********
2026-04-09 02:30:02.936593 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:30:02.936606 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:30:02.936616 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:30:02.936628 | orchestrator |
2026-04-09 02:30:02.936647 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-09 02:30:15.158891 | orchestrator | Thursday 09 April 2026 02:30:02 +0000 (0:00:00.567) 0:00:28.345 ********
2026-04-09 02:30:15.159059 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-09 02:30:15.159101 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-09 02:30:15.159122 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-09 02:30:15.159140 | orchestrator |
2026-04-09 02:30:15.159159 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-09 02:30:15.159178 | orchestrator | Thursday 09 April 2026 02:30:05 +0000 (0:00:02.380) 0:00:30.725 ********
2026-04-09 02:30:15.159195 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-09 02:30:15.159206 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-09 02:30:15.159216 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-09 02:30:15.159226 | orchestrator |
2026-04-09 02:30:15.159235 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-09 02:30:15.159245 | orchestrator | Thursday 09 April 2026 02:30:07 +0000 (0:00:02.295) 0:00:33.021 ********
2026-04-09 02:30:15.159255 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-04-09 02:30:15.159266 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-04-09 02:30:15.159276 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-04-09 02:30:15.159285 | orchestrator |
2026-04-09 02:30:15.159308 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-04-09 02:30:15.159319 | orchestrator | Thursday 09 April 2026 02:30:09 +0000 (0:00:01.408) 0:00:34.430 ********
2026-04-09 02:30:15.159329 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-04-09 02:30:15.159339 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-04-09 02:30:15.159349 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-04-09 02:30:15.159358 | orchestrator |
2026-04-09 02:30:15.159391 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-09 02:30:15.159401 | orchestrator | Thursday 09 April 2026 02:30:10 +0000 (0:00:01.491) 0:00:35.922 ********
2026-04-09 02:30:15.159413 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:30:15.159424 | orchestrator |
2026-04-09 02:30:15.159471 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-04-09 02:30:15.159483 | orchestrator | Thursday 09 April 2026 02:30:11 +0000 (0:00:00.655) 0:00:36.577 ********
2026-04-09 02:30:15.159497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 02:30:15.159513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 02:30:15.159530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 02:30:15.159564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 02:30:15.159576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 02:30:15.159587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 02:30:15.159607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 02:30:15.159619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 02:30:15.159630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 02:30:15.159642 | orchestrator | 2026-04-09 02:30:15.159653 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-09 02:30:15.159664 | orchestrator | Thursday 09 April 2026 02:30:14 +0000 (0:00:03.362) 0:00:39.939 ******** 2026-04-09 02:30:15.159690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 02:30:16.037826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:16.037897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:16.037921 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:16.037928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 02:30:16.037933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:16.037938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:16.037942 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:16.037946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 02:30:16.037977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:16.037983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:16.037991 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:16.038006 | orchestrator | 2026-04-09 02:30:16.038011 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-09 
02:30:16.038060 | orchestrator | Thursday 09 April 2026 02:30:15 +0000 (0:00:00.628) 0:00:40.568 ******** 2026-04-09 02:30:16.038065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 02:30:16.038070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:16.038074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:16.038078 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:16.038083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 02:30:16.038094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:16.934545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:16.934662 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:16.934676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 02:30:16.934686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:16.934693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:16.934700 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:16.934708 | orchestrator | 2026-04-09 02:30:16.934717 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-09 02:30:16.934726 | orchestrator | Thursday 09 April 2026 02:30:16 +0000 (0:00:00.875) 0:00:41.444 ******** 2026-04-09 02:30:16.934731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 02:30:16.934735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:16.934753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:16.934762 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:16.934766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 02:30:16.934771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:16.934775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:16.934779 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:16.934783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 02:30:16.934798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:16.934814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:16.934825 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:18.399601 | orchestrator | 2026-04-09 02:30:18.399703 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-09 02:30:18.399720 | orchestrator | Thursday 09 April 2026 02:30:16 +0000 (0:00:00.886) 0:00:42.331 ******** 2026-04-09 02:30:18.399736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 02:30:18.399752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:18.399766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:18.399776 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:18.399787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 02:30:18.399797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:18.399825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:18.399857 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:18.399888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 02:30:18.399900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:18.399911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:18.399922 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:18.399932 | orchestrator | 2026-04-09 02:30:18.399943 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-09 02:30:18.399953 | orchestrator | Thursday 09 April 2026 02:30:17 +0000 (0:00:00.640) 0:00:42.971 ******** 2026-04-09 02:30:18.399963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 02:30:18.399974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:18.399997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:18.400007 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:18.400030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 02:30:19.531895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:19.531980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:19.531990 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:19.531998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 02:30:19.532005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:19.532011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:19.532037 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:19.532044 | orchestrator | 2026-04-09 02:30:19.532051 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-09 02:30:19.532057 | orchestrator | Thursday 09 April 2026 02:30:18 +0000 (0:00:00.836) 0:00:43.807 ******** 2026-04-09 02:30:19.532075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-04-09 02:30:19.532095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:19.532101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:19.532108 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:19.532114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-04-09 02:30:19.532120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:19.532131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:19.532137 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:19.532147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-04-09 02:30:19.532157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:20.995692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:20.995802 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:20.995819 | orchestrator | 2026-04-09 02:30:20.995832 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-09 02:30:20.995843 | orchestrator | Thursday 09 April 2026 02:30:19 +0000 (0:00:01.122) 0:00:44.930 ******** 2026-04-09 02:30:20.995857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 02:30:20.995870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:20.995906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:20.995919 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:20.995931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 02:30:20.995951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:20.995973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:20.995980 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:20.995987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 02:30:20.995993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:20.996005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:20.996012 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:20.996018 | orchestrator | 2026-04-09 02:30:20.996024 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-09 02:30:20.996031 | orchestrator | Thursday 09 April 2026 02:30:20 +0000 (0:00:00.650) 0:00:45.581 ******** 2026-04-09 02:30:20.996037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 02:30:20.996044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:20.996061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:27.695360 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:27.695488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 02:30:27.695500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:27.695522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:27.695528 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:27.695533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 02:30:27.695539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 02:30:27.695554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 02:30:27.695559 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:27.695564 | orchestrator | 2026-04-09 02:30:27.695570 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-09 02:30:27.695576 | orchestrator | Thursday 09 April 2026 02:30:20 +0000 (0:00:00.815) 0:00:46.397 ******** 2026-04-09 02:30:27.695581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-09 02:30:27.695599 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-09 02:30:27.695604 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-09 02:30:27.695609 | orchestrator | 2026-04-09 02:30:27.695614 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-09 02:30:27.695619 | orchestrator | Thursday 09 April 2026 02:30:22 +0000 (0:00:01.697) 0:00:48.094 ******** 2026-04-09 02:30:27.695624 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-09 02:30:27.695630 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-09 02:30:27.695635 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-09 02:30:27.695640 | orchestrator | 2026-04-09 02:30:27.695650 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-09 02:30:27.695655 | orchestrator | Thursday 09 April 2026 02:30:24 +0000 (0:00:01.744) 0:00:49.839 ******** 2026-04-09 02:30:27.695659 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 02:30:27.695664 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 02:30:27.695669 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 02:30:27.695674 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 02:30:27.695679 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:27.695683 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 02:30:27.695688 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:27.695693 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 02:30:27.695698 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:27.695702 | orchestrator | 2026-04-09 02:30:27.695707 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-09 02:30:27.695712 | orchestrator | Thursday 09 April 2026 02:30:25 +0000 (0:00:00.881) 0:00:50.720 ******** 2026-04-09 02:30:27.695717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 02:30:27.695723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 02:30:27.695732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 02:30:27.695742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:30:32.360659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:30:32.360749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 02:30:32.360759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:30:32.360765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:30:32.360770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 02:30:32.360774 | orchestrator |
2026-04-09 02:30:32.360780 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-09 02:30:32.360799 | orchestrator | Thursday 09 April 2026 02:30:27 +0000 (0:00:02.380) 0:00:53.100 ********
2026-04-09 02:30:32.360804 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:30:32.360823 | orchestrator |
2026-04-09 02:30:32.360828 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-09 02:30:32.360832 | orchestrator | Thursday 09 April 2026 02:30:28 +0000 (0:00:00.853) 0:00:53.954 ********
2026-04-09 02:30:32.360858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 02:30:32.360901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 02:30:32.360907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:32.360912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 02:30:32.360916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 02:30:32.360925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 02:30:32.360946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 02:30:33.047393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:33.047515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 02:30:33.047523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 02:30:33.047528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:33.047549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 02:30:33.047555 | orchestrator |
2026-04-09 02:30:33.047560 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-04-09 02:30:33.047565 | orchestrator | Thursday 09 April 2026 02:30:32 +0000 (0:00:03.813) 0:00:57.767 ********
2026-04-09 02:30:33.047584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 02:30:33.047597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 02:30:33.047602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:33.047606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 02:30:33.047610 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:30:33.047615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 02:30:33.047623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 02:30:33.047636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:33.047644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.112704 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:30:42.112795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 02:30:42.112805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 02:30:42.112811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.112815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.112836 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:30:42.112841 | orchestrator |
2026-04-09 02:30:42.112846 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-04-09 02:30:42.112851 | orchestrator | Thursday 09 April 2026 02:30:33 +0000 (0:00:00.688) 0:00:58.455 ********
2026-04-09 02:30:42.112856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-09 02:30:42.112862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-09 02:30:42.112868 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:30:42.112884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-09 02:30:42.112888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-09 02:30:42.112892 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:30:42.112896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-09 02:30:42.112911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-09 02:30:42.112915 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:30:42.112919 | orchestrator |
2026-04-09 02:30:42.112923 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-04-09 02:30:42.112926 | orchestrator | Thursday 09 April 2026 02:30:34 +0000 (0:00:01.158) 0:00:59.614 ********
2026-04-09 02:30:42.112930 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:30:42.112934 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:30:42.112938 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:30:42.112941 | orchestrator |
2026-04-09 02:30:42.112946 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-04-09 02:30:42.112949 | orchestrator | Thursday 09 April 2026 02:30:35 +0000 (0:00:01.326) 0:01:00.940 ********
2026-04-09 02:30:42.112953 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:30:42.112957 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:30:42.112961 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:30:42.112964 | orchestrator |
2026-04-09 02:30:42.112968 | orchestrator | TASK [include_role : barbican] *************************************************
2026-04-09 02:30:42.112972 | orchestrator | Thursday 09 April 2026 02:30:37 +0000 (0:00:02.081) 0:01:03.022 ********
2026-04-09 02:30:42.112976 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:30:42.112996 | orchestrator |
2026-04-09 02:30:42.113000 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-04-09 02:30:42.113004 | orchestrator | Thursday 09 April 2026 02:30:38 +0000 (0:00:00.810) 0:01:03.832 ********
2026-04-09 02:30:42.113009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 02:30:42.113023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.113029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.113036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 02:30:42.789348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.789473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.789499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 02:30:42.789513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.789518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.789522 | orchestrator |
2026-04-09 02:30:42.789528 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-04-09 02:30:42.789533 | orchestrator | Thursday 09 April 2026 02:30:42 +0000 (0:00:03.687) 0:01:07.520 ********
2026-04-09 02:30:42.789550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 02:30:42.789555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.789562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.789566 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:30:42.789574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 02:30:42.789578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.789582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 02:30:42.789585 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:30:42.789593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 02:30:52.825966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 02:30:52.826143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:30:52.826163 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:52.826178 | orchestrator | 2026-04-09 02:30:52.826192 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-09 02:30:52.826206 | orchestrator | Thursday 09 April 2026 02:30:42 +0000 (0:00:00.675) 0:01:08.196 ******** 2026-04-09 02:30:52.826237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 02:30:52.826252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 02:30:52.826267 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:52.826281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 02:30:52.826294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 02:30:52.826307 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:52.826320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 02:30:52.826333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 02:30:52.826346 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:52.826359 | orchestrator | 2026-04-09 02:30:52.826371 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-09 02:30:52.826383 | orchestrator | Thursday 09 April 2026 02:30:43 +0000 (0:00:00.969) 0:01:09.165 ******** 2026-04-09 02:30:52.826395 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:30:52.826409 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:30:52.826486 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:30:52.826500 | orchestrator | 2026-04-09 02:30:52.826514 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-09 02:30:52.826527 | orchestrator | Thursday 09 April 2026 02:30:45 +0000 (0:00:01.628) 0:01:10.793 ******** 2026-04-09 02:30:52.826566 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:30:52.826580 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:30:52.826605 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:30:52.826619 | orchestrator | 2026-04-09 02:30:52.826633 | orchestrator | TASK [include_role : blazar] 
*************************************************** 2026-04-09 02:30:52.826645 | orchestrator | Thursday 09 April 2026 02:30:47 +0000 (0:00:02.016) 0:01:12.810 ******** 2026-04-09 02:30:52.826658 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:52.826671 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:30:52.826683 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:30:52.826696 | orchestrator | 2026-04-09 02:30:52.826709 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-09 02:30:52.826722 | orchestrator | Thursday 09 April 2026 02:30:47 +0000 (0:00:00.335) 0:01:13.146 ******** 2026-04-09 02:30:52.826735 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:30:52.826748 | orchestrator | 2026-04-09 02:30:52.826762 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-09 02:30:52.826796 | orchestrator | Thursday 09 April 2026 02:30:48 +0000 (0:00:00.772) 0:01:13.918 ******** 2026-04-09 02:30:52.826812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 02:30:52.826834 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 02:30:52.826847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 02:30:52.826859 | orchestrator | 2026-04-09 02:30:52.826871 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-09 02:30:52.826885 | orchestrator | Thursday 09 April 2026 02:30:51 +0000 (0:00:02.803) 0:01:16.722 ******** 2026-04-09 
02:30:52.826906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 02:30:52.826919 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:30:52.826941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 02:31:01.211123 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:01.211215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 
'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 02:31:01.211226 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:01.211233 | orchestrator | 2026-04-09 02:31:01.211241 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-09 02:31:01.211249 | orchestrator | Thursday 09 April 2026 02:30:52 +0000 (0:00:01.510) 0:01:18.232 ******** 2026-04-09 02:31:01.211272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-09 02:31:01.211281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 
5']}})  2026-04-09 02:31:01.211289 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:01.211315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-09 02:31:01.211321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-09 02:31:01.211327 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:01.211334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-09 02:31:01.211340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-09 02:31:01.211346 | 
orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:01.211353 | orchestrator | 2026-04-09 02:31:01.211360 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-09 02:31:01.211367 | orchestrator | Thursday 09 April 2026 02:30:54 +0000 (0:00:01.892) 0:01:20.125 ******** 2026-04-09 02:31:01.211374 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:01.211380 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:01.211387 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:01.211394 | orchestrator | 2026-04-09 02:31:01.211403 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-09 02:31:01.211500 | orchestrator | Thursday 09 April 2026 02:30:55 +0000 (0:00:00.456) 0:01:20.581 ******** 2026-04-09 02:31:01.211508 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:01.211514 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:01.211521 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:01.211528 | orchestrator | 2026-04-09 02:31:01.211534 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-09 02:31:01.211541 | orchestrator | Thursday 09 April 2026 02:30:56 +0000 (0:00:01.385) 0:01:21.967 ******** 2026-04-09 02:31:01.211547 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:31:01.211555 | orchestrator | 2026-04-09 02:31:01.211561 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-09 02:31:01.211568 | orchestrator | Thursday 09 April 2026 02:30:57 +0000 (0:00:00.975) 0:01:22.942 ******** 2026-04-09 02:31:01.211580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 02:31:01.211596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.211604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.211614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.211627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 02:31:01.895069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.895231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 02:31:01.895297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.895319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.895341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.895388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.895487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.895528 | orchestrator | 2026-04-09 02:31:01.895560 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-09 02:31:01.895584 | orchestrator | Thursday 09 April 2026 02:31:01 +0000 (0:00:03.749) 0:01:26.692 ******** 2026-04-09 02:31:01.895606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 02:31:01.895628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.895650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.895672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 02:31:01.895691 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:01.895723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 02:31:08.506635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:31:08.506745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 02:31:08.506762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 02:31:08.506775 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:08.506790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 02:31:08.506803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:31:08.506867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-09 02:31:08.506881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 02:31:08.506893 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:08.506904 | orchestrator | 2026-04-09 02:31:08.506916 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-09 02:31:08.506929 | orchestrator | Thursday 09 April 2026 02:31:01 +0000 (0:00:00.711) 0:01:27.403 ******** 2026-04-09 02:31:08.506941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 02:31:08.506953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 02:31:08.506965 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:08.506977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 02:31:08.506988 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 02:31:08.506999 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:08.507010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 02:31:08.507021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 02:31:08.507032 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:08.507043 | orchestrator | 2026-04-09 02:31:08.507057 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-09 02:31:08.507069 | orchestrator | Thursday 09 April 2026 02:31:03 +0000 (0:00:01.220) 0:01:28.623 ******** 2026-04-09 02:31:08.507082 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:31:08.507104 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:31:08.507117 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:31:08.507129 | orchestrator | 2026-04-09 02:31:08.507142 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-09 02:31:08.507155 | orchestrator | Thursday 09 April 2026 02:31:04 +0000 (0:00:01.326) 0:01:29.950 ******** 2026-04-09 02:31:08.507167 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:31:08.507181 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:31:08.507194 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:31:08.507207 | orchestrator | 2026-04-09 02:31:08.507220 | orchestrator | TASK [include_role : cloudkitty] 
*********************************************** 2026-04-09 02:31:08.507233 | orchestrator | Thursday 09 April 2026 02:31:06 +0000 (0:00:02.167) 0:01:32.117 ******** 2026-04-09 02:31:08.507246 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:08.507259 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:08.507271 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:08.507284 | orchestrator | 2026-04-09 02:31:08.507296 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-09 02:31:08.507310 | orchestrator | Thursday 09 April 2026 02:31:07 +0000 (0:00:00.333) 0:01:32.451 ******** 2026-04-09 02:31:08.507324 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:08.507336 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:08.507350 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:08.507362 | orchestrator | 2026-04-09 02:31:08.507376 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-09 02:31:08.507388 | orchestrator | Thursday 09 April 2026 02:31:07 +0000 (0:00:00.326) 0:01:32.778 ******** 2026-04-09 02:31:08.507402 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:31:08.507453 | orchestrator | 2026-04-09 02:31:08.507471 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-09 02:31:08.507489 | orchestrator | Thursday 09 April 2026 02:31:08 +0000 (0:00:01.134) 0:01:33.913 ******** 2026-04-09 02:31:12.594554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 02:31:12.594654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 02:31:12.594665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 02:31:12.594699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 02:31:12.594707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 02:31:12.594714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:31:12.594741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 02:31:12.594749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 02:31:12.594755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 02:31:12.594768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 02:31:12.594775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 02:31:12.594781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 
02:31:12.594797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:31:13.597204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 02:31:13.597277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 02:31:13.597300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 02:31:13.597306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 02:31:13.597310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 02:31:13.597325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 02:31:13.597339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:31:13.597344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 02:31:13.597351 | orchestrator | 2026-04-09 02:31:13.597357 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-09 02:31:13.597361 | orchestrator | Thursday 09 April 2026 02:31:12 +0000 (0:00:04.336) 0:01:38.249 ******** 2026-04-09 02:31:13.597365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 02:31:13.597369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-04-09 02:31:13.597373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 02:31:13.597377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 02:31:13.597386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 02:31:14.003779 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:31:14.003983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 02:31:14.003997 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:14.004009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 02:31:14.004017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 02:31:14.004529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 02:31:14.004579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 02:31:14.004609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 02:31:14.004630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:31:14.004641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 02:31:14.004648 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:14.004657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 02:31:14.004664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 02:31:14.004671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 02:31:14.004689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 02:31:24.753080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 02:31:24.753225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:31:24.753258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 02:31:24.753276 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:24.753289 | orchestrator | 2026-04-09 02:31:24.753301 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-09 02:31:24.753314 | orchestrator | Thursday 09 April 2026 02:31:13 +0000 (0:00:01.156) 0:01:39.405 ******** 2026-04-09 02:31:24.753326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-09 02:31:24.753339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-09 02:31:24.753351 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:24.753391 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-09 02:31:24.753437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-09 02:31:24.753449 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:24.753460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-09 02:31:24.753509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-09 02:31:24.753522 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:24.753532 | orchestrator | 2026-04-09 02:31:24.753543 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-09 02:31:24.753554 | orchestrator | Thursday 09 April 2026 02:31:15 +0000 (0:00:01.364) 0:01:40.770 ******** 2026-04-09 02:31:24.753566 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:31:24.753577 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:31:24.753588 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:31:24.753601 | orchestrator | 2026-04-09 02:31:24.753613 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-09 02:31:24.753626 | orchestrator | Thursday 09 April 2026 02:31:16 +0000 (0:00:01.303) 0:01:42.073 ******** 2026-04-09 02:31:24.753639 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:31:24.753651 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:31:24.753663 | 
orchestrator | changed: [testbed-node-2] 2026-04-09 02:31:24.753675 | orchestrator | 2026-04-09 02:31:24.753688 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-09 02:31:24.753700 | orchestrator | Thursday 09 April 2026 02:31:18 +0000 (0:00:02.176) 0:01:44.250 ******** 2026-04-09 02:31:24.753731 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:24.753744 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:24.753756 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:24.753769 | orchestrator | 2026-04-09 02:31:24.753781 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-09 02:31:24.753793 | orchestrator | Thursday 09 April 2026 02:31:19 +0000 (0:00:00.334) 0:01:44.584 ******** 2026-04-09 02:31:24.753805 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:31:24.753819 | orchestrator | 2026-04-09 02:31:24.753831 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-09 02:31:24.753844 | orchestrator | Thursday 09 April 2026 02:31:20 +0000 (0:00:01.141) 0:01:45.725 ******** 2026-04-09 02:31:24.753865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 02:31:24.753881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 02:31:24.753920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 02:31:28.073764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 02:31:28.073909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 02:31:28.073949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 02:31:28.073968 | orchestrator | 2026-04-09 02:31:28.073979 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-09 02:31:28.073990 | orchestrator | Thursday 09 April 2026 02:31:24 +0000 (0:00:04.605) 0:01:50.331 ******** 2026-04-09 02:31:28.074007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 02:31:28.074115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 02:31:32.510266 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:32.510449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 02:31:32.510501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 02:31:32.510546 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:32.510582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 02:31:32.510598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 02:31:32.510615 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:32.510624 | orchestrator | 2026-04-09 02:31:32.510633 | orchestrator | TASK [haproxy-config : Configuring 
firewall for glance] ************************ 2026-04-09 02:31:32.510642 | orchestrator | Thursday 09 April 2026 02:31:28 +0000 (0:00:03.276) 0:01:53.607 ******** 2026-04-09 02:31:32.510651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 02:31:32.510667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 02:31:41.356881 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:41.356988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 02:31:41.357005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 02:31:41.357019 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:41.357035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 02:31:41.357068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 02:31:41.357085 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:41.357102 | orchestrator | 2026-04-09 02:31:41.357119 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-09 02:31:41.357160 | orchestrator | Thursday 09 April 2026 02:31:32 +0000 (0:00:04.308) 0:01:57.916 ******** 2026-04-09 
02:31:41.357171 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:31:41.357194 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:31:41.357202 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:31:41.357211 | orchestrator | 2026-04-09 02:31:41.357220 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-09 02:31:41.357229 | orchestrator | Thursday 09 April 2026 02:31:33 +0000 (0:00:01.306) 0:01:59.222 ******** 2026-04-09 02:31:41.357238 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:31:41.357246 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:31:41.357255 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:31:41.357264 | orchestrator | 2026-04-09 02:31:41.357272 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-09 02:31:41.357285 | orchestrator | Thursday 09 April 2026 02:31:35 +0000 (0:00:02.147) 0:02:01.369 ******** 2026-04-09 02:31:41.357304 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:41.357328 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:41.357343 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:41.357359 | orchestrator | 2026-04-09 02:31:41.357374 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-09 02:31:41.357413 | orchestrator | Thursday 09 April 2026 02:31:36 +0000 (0:00:00.322) 0:02:01.692 ******** 2026-04-09 02:31:41.357428 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:31:41.357442 | orchestrator | 2026-04-09 02:31:41.357458 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-09 02:31:41.357472 | orchestrator | Thursday 09 April 2026 02:31:37 +0000 (0:00:01.179) 0:02:02.872 ******** 2026-04-09 02:31:41.357510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 02:31:41.357531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 02:31:41.357542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 02:31:41.357552 | orchestrator | 2026-04-09 02:31:41.357563 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-09 02:31:41.357584 | orchestrator | Thursday 09 April 2026 02:31:40 +0000 (0:00:03.184) 0:02:06.056 ******** 2026-04-09 02:31:41.357595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 02:31:41.357607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 02:31:41.357617 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:41.357627 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:41.357638 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 02:31:41.357714 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:41.357733 | orchestrator | 2026-04-09 02:31:41.357743 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-09 02:31:41.357754 | orchestrator | Thursday 09 April 2026 02:31:41 +0000 (0:00:00.432) 0:02:06.489 ******** 2026-04-09 02:31:41.357765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-09 02:31:41.357786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-09 02:31:50.522183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-09 02:31:50.522276 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:50.522287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-09 02:31:50.522297 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:50.522304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-09 02:31:50.522312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-09 02:31:50.522339 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:50.522347 | orchestrator | 2026-04-09 02:31:50.522355 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-09 02:31:50.522364 | orchestrator | Thursday 09 April 2026 02:31:42 +0000 (0:00:01.033) 0:02:07.522 ******** 2026-04-09 02:31:50.522371 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:31:50.522378 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:31:50.522512 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:31:50.522522 | orchestrator | 2026-04-09 02:31:50.522529 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-09 02:31:50.522537 | orchestrator | Thursday 09 April 2026 02:31:43 +0000 (0:00:01.344) 0:02:08.867 ******** 2026-04-09 02:31:50.522545 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:31:50.522556 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:31:50.522568 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:31:50.522580 | orchestrator | 2026-04-09 02:31:50.522592 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-09 02:31:50.522620 | orchestrator | Thursday 09 April 2026 02:31:45 +0000 (0:00:02.213) 0:02:11.081 ******** 2026-04-09 02:31:50.522631 | orchestrator 
| skipping: [testbed-node-0] 2026-04-09 02:31:50.522643 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:50.522655 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:50.522668 | orchestrator | 2026-04-09 02:31:50.522681 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-09 02:31:50.522693 | orchestrator | Thursday 09 April 2026 02:31:45 +0000 (0:00:00.323) 0:02:11.404 ******** 2026-04-09 02:31:50.522706 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:31:50.522719 | orchestrator | 2026-04-09 02:31:50.522732 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-09 02:31:50.522745 | orchestrator | Thursday 09 April 2026 02:31:47 +0000 (0:00:01.193) 0:02:12.598 ******** 2026-04-09 02:31:50.522785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 02:31:50.522815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 02:31:50.522834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 02:31:52.236029 | orchestrator | 2026-04-09 02:31:52.236148 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-09 02:31:52.236171 | orchestrator | Thursday 09 April 2026 02:31:50 +0000 (0:00:03.332) 0:02:15.931 ******** 2026-04-09 02:31:52.236214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-04-09 02:31:52.236237 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:31:52.236278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 02:31:52.236321 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:31:52.236339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 02:31:52.236349 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:31:52.236358 | orchestrator | 2026-04-09 02:31:52.236367 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-09 02:31:52.236376 | orchestrator | Thursday 09 April 2026 02:31:51 +0000 (0:00:00.726) 0:02:16.657 ******** 2026-04-09 02:31:52.236418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 02:31:52.236438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 02:31:52.236450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 02:31:52.236467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 02:32:01.484226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 02:32:01.484308 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:01.484317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 02:32:01.484325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 02:32:01.484344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 02:32:01.484351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 02:32:01.484361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 02:32:01.484368 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:01.484375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 02:32:01.484431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 02:32:01.484440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 02:32:01.484467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 02:32:01.484475 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 02:32:01.484481 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:01.484488 | orchestrator | 2026-04-09 02:32:01.484496 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-09 02:32:01.484504 | orchestrator | Thursday 09 April 2026 02:31:52 +0000 (0:00:00.986) 0:02:17.644 ******** 2026-04-09 02:32:01.484512 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:32:01.484519 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:32:01.484526 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:32:01.484532 | orchestrator | 2026-04-09 02:32:01.484539 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-09 02:32:01.484546 | orchestrator | Thursday 09 April 2026 02:31:53 +0000 (0:00:01.699) 0:02:19.344 ******** 2026-04-09 02:32:01.484553 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:32:01.484561 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:32:01.484568 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:32:01.484574 | orchestrator | 2026-04-09 02:32:01.484582 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-09 02:32:01.484589 | orchestrator | Thursday 09 April 2026 02:31:56 +0000 (0:00:02.138) 0:02:21.483 ******** 2026-04-09 02:32:01.484595 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:01.484602 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:01.484624 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:01.484631 | orchestrator | 2026-04-09 02:32:01.484639 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-09 02:32:01.484647 | orchestrator | Thursday 09 April 2026 02:31:56 +0000 (0:00:00.325) 0:02:21.808 
******** 2026-04-09 02:32:01.484654 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:01.484661 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:01.484668 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:01.484675 | orchestrator | 2026-04-09 02:32:01.484681 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-09 02:32:01.484688 | orchestrator | Thursday 09 April 2026 02:31:56 +0000 (0:00:00.360) 0:02:22.169 ******** 2026-04-09 02:32:01.484693 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:32:01.484697 | orchestrator | 2026-04-09 02:32:01.484701 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-09 02:32:01.484705 | orchestrator | Thursday 09 April 2026 02:31:58 +0000 (0:00:01.259) 0:02:23.428 ******** 2026-04-09 02:32:01.484721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 
02:32:01.484746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 02:32:01.484752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 02:32:01.484764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 02:32:01.484774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 02:32:02.138193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 02:32:02.138300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 02:32:02.138337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 02:32:02.138349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 02:32:02.138357 | orchestrator | 2026-04-09 02:32:02.138368 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-09 02:32:02.138430 | orchestrator | Thursday 09 April 2026 02:32:01 +0000 (0:00:03.459) 0:02:26.888 ******** 2026-04-09 02:32:02.138463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 02:32:02.138481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 02:32:02.138491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 02:32:02.138510 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:02.138522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 
02:32:02.138532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 02:32:02.138542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 02:32:02.138550 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:02.138572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 02:32:11.672737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 02:32:11.672864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 02:32:11.672890 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:11.672909 | orchestrator | 2026-04-09 02:32:11.672921 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2026-04-09 02:32:11.672931 | orchestrator | Thursday 09 April 2026 02:32:02 +0000 (0:00:00.649) 0:02:27.537 ******** 2026-04-09 02:32:11.672943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 02:32:11.672955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 02:32:11.672966 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:11.672975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 02:32:11.672985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 02:32:11.672994 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:11.673003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 02:32:11.673012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 02:32:11.673020 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:11.673029 | orchestrator | 2026-04-09 02:32:11.673038 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-09 02:32:11.673047 | orchestrator | Thursday 09 April 2026 02:32:03 +0000 (0:00:01.145) 0:02:28.683 ******** 2026-04-09 02:32:11.673056 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:32:11.673064 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:32:11.673132 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:32:11.673149 | orchestrator | 2026-04-09 02:32:11.673164 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-09 02:32:11.673180 | orchestrator | Thursday 09 April 2026 02:32:04 +0000 (0:00:01.297) 0:02:29.980 ******** 2026-04-09 02:32:11.673195 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:32:11.673211 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:32:11.673223 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:32:11.673232 | orchestrator | 2026-04-09 02:32:11.673243 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-09 02:32:11.673253 | orchestrator | Thursday 09 April 2026 02:32:06 +0000 (0:00:02.077) 0:02:32.057 ******** 2026-04-09 02:32:11.673263 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:11.673286 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:11.673297 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:11.673307 | orchestrator | 2026-04-09 02:32:11.673317 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-09 02:32:11.673346 | orchestrator | Thursday 09 April 2026 02:32:06 +0000 (0:00:00.337) 0:02:32.394 ******** 2026-04-09 02:32:11.673357 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:32:11.673366 | orchestrator | 2026-04-09 02:32:11.673405 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-09 02:32:11.673416 | orchestrator | Thursday 09 April 2026 02:32:08 +0000 (0:00:01.277) 0:02:33.672 ******** 2026-04-09 02:32:11.673428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 02:32:11.673442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 02:32:11.673454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 02:32:11.673473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 02:32:11.673494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 02:32:17.217877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 02:32:17.217962 | orchestrator | 2026-04-09 02:32:17.217973 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-09 02:32:17.217981 | orchestrator | Thursday 09 April 2026 02:32:11 +0000 (0:00:03.405) 0:02:37.077 ******** 2026-04-09 02:32:17.217990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 02:32:17.218071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 02:32:17.218101 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:17.218113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 02:32:17.218133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 02:32:17.218140 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:17.218146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 02:32:17.218153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 02:32:17.218165 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:17.218171 | orchestrator | 2026-04-09 02:32:17.218178 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-09 02:32:17.218184 | orchestrator | Thursday 09 April 2026 02:32:12 +0000 (0:00:00.693) 0:02:37.771 ******** 2026-04-09 02:32:17.218192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-09 02:32:17.218200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-09 02:32:17.218208 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:17.218215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-09 02:32:17.218221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-09 02:32:17.218227 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:17.218234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-09 02:32:17.218240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-09 02:32:17.218246 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:17.218252 | orchestrator | 2026-04-09 02:32:17.218267 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-09 02:32:17.218274 | orchestrator | Thursday 09 April 2026 02:32:13 +0000 (0:00:01.074) 0:02:38.845 ******** 2026-04-09 02:32:17.218280 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:32:17.218287 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:32:17.218293 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:32:17.218299 | orchestrator | 2026-04-09 02:32:17.218305 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-09 02:32:17.218312 | orchestrator | Thursday 09 April 2026 02:32:15 +0000 (0:00:01.689) 0:02:40.535 ******** 
2026-04-09 02:32:17.218318 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:32:17.218324 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:32:17.218330 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:32:17.218336 | orchestrator | 2026-04-09 02:32:17.218343 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-09 02:32:17.218353 | orchestrator | Thursday 09 April 2026 02:32:17 +0000 (0:00:02.081) 0:02:42.616 ******** 2026-04-09 02:32:21.838612 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:32:21.838718 | orchestrator | 2026-04-09 02:32:21.838735 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-09 02:32:21.838747 | orchestrator | Thursday 09 April 2026 02:32:18 +0000 (0:00:01.081) 0:02:43.698 ******** 2026-04-09 02:32:21.838762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 02:32:21.838803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:32:21.838816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 02:32:21.838829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}}}}) 2026-04-09 02:32:21.838855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 02:32:21.838886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:32:21.838898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 
5672'], 'timeout': '30'}}})  2026-04-09 02:32:21.838917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 02:32:21.838928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 02:32:21.838939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:32:21.838956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 02:32:21.838976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 02:32:22.859817 | orchestrator | 2026-04-09 02:32:22.859922 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-09 02:32:22.859939 | orchestrator | Thursday 09 April 2026 02:32:21 +0000 (0:00:03.638) 0:02:47.336 ******** 2026-04-09 02:32:22.859976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 02:32:22.859991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:32:22.860003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 02:32:22.860015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 02:32:22.860039 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:22.860065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 02:32:22.860094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:32:22.860113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 02:32:22.860124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 02:32:22.860134 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:22.860145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 02:32:22.860161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 02:32:22.860171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 02:32:22.860190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 02:32:34.640108 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:34.640193 | orchestrator | 2026-04-09 02:32:34.640204 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-09 02:32:34.640211 | orchestrator | Thursday 09 April 2026 02:32:22 +0000 (0:00:01.026) 0:02:48.362 ******** 2026-04-09 02:32:34.640219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-09 02:32:34.640227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-09 02:32:34.640235 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:34.640242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-09 02:32:34.640249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-09 02:32:34.640255 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:34.640262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-09 02:32:34.640268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-09 02:32:34.640274 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:34.640280 | orchestrator | 2026-04-09 02:32:34.640286 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-09 02:32:34.640293 | orchestrator | Thursday 09 April 2026 02:32:23 +0000 (0:00:00.950) 0:02:49.312 ******** 2026-04-09 02:32:34.640299 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:32:34.640305 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:32:34.640311 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:32:34.640317 | orchestrator | 2026-04-09 02:32:34.640324 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-09 02:32:34.640330 | orchestrator | Thursday 09 April 2026 02:32:25 +0000 (0:00:01.403) 0:02:50.716 ******** 2026-04-09 02:32:34.640336 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:32:34.640342 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:32:34.640348 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:32:34.640354 | orchestrator | 2026-04-09 02:32:34.640360 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-09 02:32:34.640414 | orchestrator | Thursday 09 April 2026 02:32:27 +0000 (0:00:02.168) 0:02:52.885 
******** 2026-04-09 02:32:34.640421 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:32:34.640427 | orchestrator | 2026-04-09 02:32:34.640434 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-09 02:32:34.640440 | orchestrator | Thursday 09 April 2026 02:32:28 +0000 (0:00:01.472) 0:02:54.357 ******** 2026-04-09 02:32:34.640447 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 02:32:34.640453 | orchestrator | 2026-04-09 02:32:34.640480 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-09 02:32:34.640486 | orchestrator | Thursday 09 April 2026 02:32:32 +0000 (0:00:03.146) 0:02:57.504 ******** 2026-04-09 02:32:34.640522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:32:34.640533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 02:32:34.640540 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:34.640551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:32:34.640563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 02:32:34.640570 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 02:32:34.640582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:32:37.264999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 02:32:37.265094 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:37.265108 | orchestrator | 2026-04-09 02:32:37.265118 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-09 02:32:37.265129 | orchestrator | Thursday 09 April 2026 02:32:34 +0000 (0:00:02.539) 0:03:00.044 ******** 2026-04-09 02:32:37.265206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:32:37.265228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 02:32:37.265243 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:37.265281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:32:37.265323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 02:32:37.265340 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:37.265358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:32:37.265443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 02:32:47.494882 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:47.495015 | orchestrator | 2026-04-09 02:32:47.495045 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-09 02:32:47.495068 | orchestrator | Thursday 09 April 2026 02:32:37 +0000 (0:00:02.627) 0:03:02.671 ******** 2026-04-09 02:32:47.495091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 02:32:47.495167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 02:32:47.495192 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:47.495210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 02:32:47.495222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 02:32:47.495234 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:47.495245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 02:32:47.495256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 02:32:47.495268 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:47.495279 | orchestrator | 2026-04-09 02:32:47.495290 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-09 02:32:47.495301 | orchestrator | Thursday 09 April 2026 02:32:40 +0000 (0:00:03.060) 0:03:05.732 ******** 2026-04-09 02:32:47.495312 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:32:47.495382 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:32:47.495399 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:32:47.495411 | orchestrator | 2026-04-09 02:32:47.495425 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-09 02:32:47.495437 | orchestrator | Thursday 09 April 2026 02:32:42 +0000 (0:00:02.141) 0:03:07.873 ******** 2026-04-09 02:32:47.495450 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:47.495463 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:47.495476 | 
orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:47.495488 | orchestrator | 2026-04-09 02:32:47.495500 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-09 02:32:47.495514 | orchestrator | Thursday 09 April 2026 02:32:44 +0000 (0:00:01.593) 0:03:09.466 ******** 2026-04-09 02:32:47.495526 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:47.495539 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:47.495552 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:47.495564 | orchestrator | 2026-04-09 02:32:47.495577 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-09 02:32:47.495590 | orchestrator | Thursday 09 April 2026 02:32:44 +0000 (0:00:00.323) 0:03:09.790 ******** 2026-04-09 02:32:47.495602 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:32:47.495616 | orchestrator | 2026-04-09 02:32:47.495628 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-09 02:32:47.495641 | orchestrator | Thursday 09 April 2026 02:32:45 +0000 (0:00:01.392) 0:03:11.183 ******** 2026-04-09 02:32:47.495662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-04-09 02:32:47.495680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 02:32:47.495694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 02:32:47.495707 | orchestrator | 2026-04-09 02:32:47.495720 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-09 02:32:47.495741 | orchestrator | Thursday 09 April 2026 02:32:47 +0000 (0:00:01.505) 0:03:12.689 ******** 2026-04-09 02:32:47.495763 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 02:32:56.558387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 02:32:56.558509 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:56.558523 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:56.558533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 02:32:56.558546 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:56.558560 | orchestrator | 2026-04-09 02:32:56.558575 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-09 02:32:56.558593 | orchestrator | Thursday 09 April 2026 02:32:47 +0000 (0:00:00.409) 0:03:13.098 ******** 2026-04-09 02:32:56.558615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 02:32:56.558629 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:56.558642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 02:32:56.558654 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:56.558666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 02:32:56.558706 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 02:32:56.558719 | orchestrator | 2026-04-09 02:32:56.558775 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-09 02:32:56.558788 | orchestrator | Thursday 09 April 2026 02:32:48 +0000 (0:00:00.963) 0:03:14.062 ******** 2026-04-09 02:32:56.558801 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:56.558813 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:56.558824 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:56.558837 | orchestrator | 2026-04-09 02:32:56.558849 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-09 02:32:56.558862 | orchestrator | Thursday 09 April 2026 02:32:49 +0000 (0:00:00.512) 0:03:14.575 ******** 2026-04-09 02:32:56.558875 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:56.558888 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:56.558902 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:56.558915 | orchestrator | 2026-04-09 02:32:56.558928 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-09 02:32:56.558942 | orchestrator | Thursday 09 April 2026 02:32:50 +0000 (0:00:01.391) 0:03:15.967 ******** 2026-04-09 02:32:56.558957 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:56.558979 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:56.558991 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:32:56.559004 | orchestrator | 2026-04-09 02:32:56.559017 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-09 02:32:56.559030 | orchestrator | Thursday 09 April 2026 02:32:50 +0000 (0:00:00.374) 0:03:16.341 ******** 2026-04-09 02:32:56.559044 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:32:56.559056 | orchestrator | 2026-04-09 02:32:56.559069 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-09 02:32:56.559082 | orchestrator | Thursday 09 April 2026 02:32:52 +0000 (0:00:01.603) 0:03:17.945 ******** 2026-04-09 02:32:56.559118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 02:32:56.559143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.559161 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.559191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.559206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-09 02:32:56.559227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.673330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 02:32:56.673511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 02:32:56.673529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.673560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 02:32:56.673572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 02:32:56.673607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.673633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.673658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.673687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-09 02:32:56.673702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 02:32:56.673718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.673762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:56.673789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-09 02:32:56.788440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-09 02:32:56.788546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:56.788558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 02:32:56.788566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-09 02:32:56.788573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:56.788599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:56.788615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:56.788622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:56.788629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:56.788636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:56.788643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-09 02:32:56.788659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 02:32:57.009145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:57.009242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:57.009261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:57.009276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-09 02:32:57.009289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:57.009302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:57.009411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:57.009444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:57.009458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-09 02:32:57.009474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 02:32:57.009486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-09 02:32:57.009503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:57.009530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-09 02:32:58.141288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:58.141380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.141397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-09 02:32:58.141410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-09 02:32:58.141418 | orchestrator |
2026-04-09 02:32:58.141428 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-04-09 02:32:58.141457 | orchestrator | Thursday 09 April 2026 02:32:56 +0000 (0:00:04.471) 0:03:22.417 ********
2026-04-09 02:32:58.141474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 02:32:58.141494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.141502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.141511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.141519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-09 02:32:58.141537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.141552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 02:32:58.252001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:58.252104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.252119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:58.252128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.252177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.252186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.252203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 02:32:58.252207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-09 02:32:58.252212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.252221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.252228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-09 02:32:58.252233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:58.252242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:58.376394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 02:32:58.376478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.376487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-09 02:32:58.376512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-09 02:32:58.376529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/',
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 02:32:58.376546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 02:32:58.376551 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:32:58.376556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:58.376560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-09 02:32:58.376569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 02:32:58.376575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 02:32:58.376579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:58.376588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:58.741934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 02:32:58.742082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:58.742127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 02:32:58.742149 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:58.742166 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:32:58.742183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-09 02:32:58.742215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:58.742225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 02:32:58.742240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 02:32:58.742249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:58.742260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 02:32:58.742268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 02:32:58.742286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-09 02:33:09.406784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 02:33:09.406876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 02:33:09.406908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 02:33:09.406923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 02:33:09.406931 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:33:09.406939 | orchestrator | 2026-04-09 02:33:09.406947 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-09 02:33:09.406955 | orchestrator | Thursday 09 April 2026 02:32:58 +0000 (0:00:01.729) 0:03:24.146 ******** 2026-04-09 02:33:09.406962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-09 02:33:09.406968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}})  2026-04-09 02:33:09.406973 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:33:09.406977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-09 02:33:09.406981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-09 02:33:09.406985 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:33:09.406998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-09 02:33:09.407011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-09 02:33:09.407020 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:33:09.407023 | orchestrator | 2026-04-09 02:33:09.407028 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-09 02:33:09.407035 | orchestrator | Thursday 09 April 2026 02:33:00 +0000 (0:00:02.185) 0:03:26.332 ******** 2026-04-09 02:33:09.407041 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:33:09.407048 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:33:09.407054 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:33:09.407060 | orchestrator | 2026-04-09 02:33:09.407067 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-09 02:33:09.407073 | orchestrator | Thursday 09 April 2026 02:33:02 +0000 (0:00:01.401) 0:03:27.733 ******** 2026-04-09 02:33:09.407080 | 
orchestrator | changed: [testbed-node-0] 2026-04-09 02:33:09.407086 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:33:09.407092 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:33:09.407098 | orchestrator | 2026-04-09 02:33:09.407105 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-09 02:33:09.407111 | orchestrator | Thursday 09 April 2026 02:33:04 +0000 (0:00:02.187) 0:03:29.921 ******** 2026-04-09 02:33:09.407118 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:33:09.407125 | orchestrator | 2026-04-09 02:33:09.407132 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-09 02:33:09.407140 | orchestrator | Thursday 09 April 2026 02:33:05 +0000 (0:00:01.291) 0:03:31.213 ******** 2026-04-09 02:33:09.407148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 02:33:09.407160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 02:33:09.407164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 02:33:09.407172 | orchestrator | 2026-04-09 02:33:09.407176 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-09 02:33:09.407185 | orchestrator | Thursday 09 April 2026 02:33:09 +0000 
(0:00:03.591) 0:03:34.804 ******** 2026-04-09 02:33:20.801511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 02:33:20.801621 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:33:20.801647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}})
2026-04-09 02:33:20.801665 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:33:20.801698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 02:33:20.801717 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:33:20.801735 | orchestrator |
2026-04-09 02:33:20.801753 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-04-09 02:33:20.801771 | orchestrator | Thursday 09 April 2026 02:33:09 +0000 (0:00:00.550) 0:03:35.355 ********
2026-04-09 02:33:20.801865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 02:33:20.801909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 02:33:20.801928 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:33:20.801946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 02:33:20.801959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 02:33:20.801969 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:33:20.801998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 02:33:20.802008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 02:33:20.802069 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:33:20.802081 | orchestrator |
2026-04-09 02:33:20.802093 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-04-09 02:33:20.802104 | orchestrator | Thursday 09 April 2026 02:33:10 +0000 (0:00:01.053) 0:03:36.409 ********
2026-04-09 02:33:20.802116 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:33:20.802127 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:33:20.802138 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:33:20.802150 | orchestrator |
2026-04-09 02:33:20.802193 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-04-09 02:33:20.802204 | orchestrator | Thursday 09 April 2026 02:33:13 +0000 (0:00:02.086) 0:03:38.495 ********
2026-04-09 02:33:20.802215 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:33:20.802227 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:33:20.802238 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:33:20.802249 | orchestrator |
2026-04-09 02:33:20.802260 | orchestrator | TASK [include_role : nova] *****************************************************
2026-04-09 02:33:20.802271 | orchestrator | Thursday 09 April 2026 02:33:15 +0000 (0:00:01.934) 0:03:40.429 ********
2026-04-09 02:33:20.802283 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:33:20.802294 | orchestrator |
2026-04-09 02:33:20.802305 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-04-09 02:33:20.802316 | orchestrator | Thursday 09 April 2026 02:33:16 +0000 (0:00:01.747) 0:03:42.177 ********
2026-04-09 02:33:20.802331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 02:33:20.802387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 02:33:20.802402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 02:33:20.802426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 02:33:21.814406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 02:33:21.814490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 02:33:21.814529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 02:33:21.814540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 02:33:21.814548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 02:33:21.814557 | orchestrator |
2026-04-09 02:33:21.814566 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-04-09 02:33:21.814575 | orchestrator | Thursday 09 April 2026 02:33:20 +0000 (0:00:04.029) 0:03:46.206 ********
2026-04-09 02:33:21.814598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 02:33:21.814613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 02:33:21.814625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 02:33:21.814633 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:33:21.814643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 02:33:21.814657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 02:33:31.908737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 02:33:31.908807 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:33:31.908829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 02:33:31.908853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 02:33:31.908861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 02:33:31.908868 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:33:31.908875 | orchestrator |
2026-04-09 02:33:31.908883 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-04-09 02:33:31.908890 | orchestrator | Thursday 09 April 2026 02:33:21 +0000 (0:00:01.006) 0:03:47.213 ********
2026-04-09 02:33:31.908898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-09 02:33:31.908907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-09 02:33:31.908915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-09 02:33:31.908931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-09 02:33:31.908939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-09 02:33:31.908946 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:33:31.908953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-09 02:33:31.908964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-09 02:33:31.908971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-09 02:33:31.908978 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:33:31.908985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-09 02:33:31.908991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-09 02:33:31.909001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-09 02:33:31.909008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-09 02:33:31.909014 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:33:31.909020 | orchestrator |
2026-04-09 02:33:31.909027 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-04-09 02:33:31.909033 | orchestrator | Thursday 09 April 2026 02:33:22 +0000 (0:00:00.865) 0:03:48.078 ********
2026-04-09 02:33:31.909039 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:33:31.909046 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:33:31.909052 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:33:31.909059 | orchestrator |
2026-04-09 02:33:31.909065 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-04-09 02:33:31.909072 | orchestrator | Thursday 09 April 2026 02:33:24 +0000 (0:00:01.341) 0:03:49.420 ********
2026-04-09 02:33:31.909078 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:33:31.909084 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:33:31.909091 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:33:31.909098 | orchestrator |
2026-04-09 02:33:31.909104 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-04-09 02:33:31.909111 | orchestrator | Thursday 09 April 2026 02:33:25 +0000 (0:00:01.969) 0:03:51.390 ********
2026-04-09 02:33:31.909117 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:33:31.909123 | orchestrator |
2026-04-09 02:33:31.909130 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-04-09 02:33:31.909137 | orchestrator | Thursday 09 April 2026 02:33:27 +0000 (0:00:01.447) 0:03:52.837 ********
2026-04-09 02:33:31.909144 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-04-09 02:33:31.909152 | orchestrator |
2026-04-09 02:33:31.909159 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-04-09 02:33:31.909166 | orchestrator | Thursday 09 April 2026 02:33:28 +0000 (0:00:00.807) 0:03:53.645 ********
2026-04-09 02:33:31.909173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:31.909190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.006694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.006796 | orchestrator |
2026-04-09 02:33:44.006808 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-04-09 02:33:44.006821 | orchestrator | Thursday 09 April 2026 02:33:31 +0000 (0:00:03.671) 0:03:57.316 ********
2026-04-09 02:33:44.006835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.006846 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:33:44.006873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.006883 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:33:44.006893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.006902 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:33:44.006909 | orchestrator |
2026-04-09 02:33:44.006917 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-04-09 02:33:44.006926 | orchestrator | Thursday 09 April 2026 02:33:33 +0000 (0:00:01.314) 0:03:58.630 ********
2026-04-09 02:33:44.006936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-09 02:33:44.006947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-09 02:33:44.006978 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:33:44.006988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-09 02:33:44.006996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-09 02:33:44.007006 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:33:44.007015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-09 02:33:44.007024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-09 02:33:44.007050 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:33:44.007056 | orchestrator |
2026-04-09 02:33:44.007062 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-09 02:33:44.007067 | orchestrator | Thursday 09 April 2026 02:33:34 +0000 (0:00:01.586) 0:04:00.217 ********
2026-04-09 02:33:44.007073 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:33:44.007078 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:33:44.007084 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:33:44.007089 | orchestrator |
2026-04-09 02:33:44.007095 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-09 02:33:44.007100 | orchestrator | Thursday 09 April 2026 02:33:37 +0000 (0:00:02.556) 0:04:02.773 ********
2026-04-09 02:33:44.007105 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:33:44.007111 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:33:44.007116 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:33:44.007121 | orchestrator |
2026-04-09 02:33:44.007127 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-04-09 02:33:44.007132 | orchestrator | Thursday 09 April 2026 02:33:40 +0000 (0:00:02.937) 0:04:05.711 ********
2026-04-09 02:33:44.007139 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-04-09 02:33:44.007146 | orchestrator |
2026-04-09 02:33:44.007152 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-04-09 02:33:44.007161 | orchestrator | Thursday 09 April 2026 02:33:41 +0000 (0:00:01.306) 0:04:07.017 ********
2026-04-09 02:33:44.007177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.007187 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:33:44.007196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.007213 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:33:44.007224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.007233 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:33:44.007240 | orchestrator |
2026-04-09 02:33:44.007245 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-04-09 02:33:44.007251 | orchestrator | Thursday 09 April 2026 02:33:42 +0000 (0:00:01.105) 0:04:08.122 ********
2026-04-09 02:33:44.007257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.007262 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:33:44.007268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:33:44.007282 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:34:08.155677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-09 02:34:08.155774 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:34:08.155785 | orchestrator |
2026-04-09 02:34:08.155791 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-04-09 02:34:08.155797 | orchestrator | Thursday 09 April 2026 02:33:43 +0000 (0:00:01.285) 0:04:09.408 ********
2026-04-09 02:34:08.155802 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:34:08.155807 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:34:08.155811 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:34:08.155815 | orchestrator |
2026-04-09 02:34:08.155819 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-09 02:34:08.155823 | orchestrator | Thursday 09 April 2026 02:33:45 +0000 (0:00:01.584) 0:04:10.993 ********
2026-04-09 02:34:08.155827 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:34:08.155832 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:34:08.155836 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:34:08.155839 | orchestrator |
2026-04-09 02:34:08.155843 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-09 02:34:08.155847 | orchestrator | Thursday 09 April 2026 02:33:48 +0000 (0:00:02.711) 0:04:13.704 ********
2026-04-09 02:34:08.155874 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:34:08.155878 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:34:08.155882 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:34:08.155885 | orchestrator |
2026-04-09 02:34:08.155901 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-04-09 02:34:08.155909 | orchestrator | Thursday 09 April 2026 02:33:51 +0000 (0:00:02.791) 0:04:16.496 ********
2026-04-09 02:34:08.155915 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-04-09 02:34:08.155922 | orchestrator |
2026-04-09 02:34:08.155929 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-04-09 02:34:08.155936 | orchestrator | Thursday 09 April 2026 02:33:52 +0000 (0:00:01.248) 0:04:17.744 ********
2026-04-09 02:34:08.155943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-09 02:34:08.155950 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:34:08.155957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-09 02:34:08.155962 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:34:08.155969 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 02:34:08.155978 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:08.155985 | orchestrator | 2026-04-09 02:34:08.155991 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-09 02:34:08.155998 | orchestrator | Thursday 09 April 2026 02:33:53 +0000 (0:00:01.386) 0:04:19.130 ******** 2026-04-09 02:34:08.156019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 02:34:08.156026 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:08.156032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 02:34:08.156043 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:08.156047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 02:34:08.156051 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:08.156055 | orchestrator | 2026-04-09 02:34:08.156062 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-09 02:34:08.156066 | orchestrator | Thursday 09 April 2026 02:33:55 +0000 (0:00:01.450) 0:04:20.580 ******** 2026-04-09 02:34:08.156070 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:08.156074 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:08.156077 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:08.156081 | orchestrator | 2026-04-09 02:34:08.156085 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 02:34:08.156089 | orchestrator | Thursday 09 April 2026 02:33:57 +0000 (0:00:02.000) 0:04:22.581 ******** 2026-04-09 02:34:08.156092 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:34:08.156096 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:34:08.156100 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:34:08.156103 | orchestrator | 2026-04-09 02:34:08.156107 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 02:34:08.156111 | orchestrator | Thursday 09 April 2026 
02:33:59 +0000 (0:00:02.408) 0:04:24.990 ******** 2026-04-09 02:34:08.156115 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:34:08.156118 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:34:08.156122 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:34:08.156126 | orchestrator | 2026-04-09 02:34:08.156129 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-09 02:34:08.156133 | orchestrator | Thursday 09 April 2026 02:34:03 +0000 (0:00:03.506) 0:04:28.496 ******** 2026-04-09 02:34:08.156137 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:34:08.156141 | orchestrator | 2026-04-09 02:34:08.156145 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-09 02:34:08.156148 | orchestrator | Thursday 09 April 2026 02:34:04 +0000 (0:00:01.707) 0:04:30.204 ******** 2026-04-09 02:34:08.156153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 02:34:08.156158 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 02:34:08.156171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 02:34:08.936803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 02:34:08.936924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:34:08.936943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 02:34:08.936956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 
02:34:08.936970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 02:34:08.937005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 02:34:08.937037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 02:34:08.937050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:34:08.937068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 02:34:08.937087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 02:34:08.937121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 02:34:08.937155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:34:08.937176 | orchestrator | 2026-04-09 02:34:08.937242 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-09 02:34:08.937268 | orchestrator | Thursday 09 April 2026 02:34:08 +0000 (0:00:03.534) 0:04:33.739 ******** 2026-04-09 02:34:08.937341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 02:34:09.087903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 02:34:09.088003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-04-09 02:34:09.088020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 02:34:09.088033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:34:09.088073 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:09.088087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 02:34:09.088099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 02:34:09.088135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 02:34:09.088143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 02:34:09.088149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:34:09.088162 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:09.088169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 02:34:09.088176 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 02:34:09.088182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 02:34:09.088199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 02:34:21.110227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 02:34:21.110398 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:21.110412 | orchestrator | 2026-04-09 02:34:21.110420 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-09 02:34:21.110429 | orchestrator | Thursday 09 April 2026 02:34:09 +0000 (0:00:00.755) 0:04:34.494 ******** 2026-04-09 02:34:21.110437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 02:34:21.110468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 02:34:21.110476 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:21.110482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 02:34:21.110488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 02:34:21.110494 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 02:34:21.110500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 02:34:21.110506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 02:34:21.110512 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:21.110518 | orchestrator | 2026-04-09 02:34:21.110524 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-09 02:34:21.110530 | orchestrator | Thursday 09 April 2026 02:34:10 +0000 (0:00:00.978) 0:04:35.473 ******** 2026-04-09 02:34:21.110536 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:34:21.110542 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:34:21.110548 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:34:21.110554 | orchestrator | 2026-04-09 02:34:21.110560 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-09 02:34:21.110566 | orchestrator | Thursday 09 April 2026 02:34:11 +0000 (0:00:01.783) 0:04:37.256 ******** 2026-04-09 02:34:21.110572 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:34:21.110577 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:34:21.110584 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:34:21.110590 | orchestrator | 2026-04-09 02:34:21.110596 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-09 02:34:21.110602 | orchestrator | Thursday 09 April 2026 02:34:14 +0000 (0:00:02.207) 0:04:39.463 ******** 2026-04-09 02:34:21.110608 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 
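The haproxy-config tasks above follow one pattern throughout: each service dict is iterated item by item, entries whose `enabled` flag is falsy are skipped, and config is templated only for the enabled ones. As a rough sketch (this is an assumption about the role's logic, not the actual kolla-ansible code; `is_enabled` and `services_to_template` are hypothetical helpers), the filtering seen in the skip/changed lines looks like:

```python
def is_enabled(value):
    """Normalise kolla-style enabled flags: the log mixes booleans
    (True/False) and strings ('yes'/'no'), so treat both."""
    if isinstance(value, str):
        return value.lower() in ("yes", "true", "1")
    return bool(value)

def services_to_template(services):
    """Return the (name, config) pairs the role would template;
    disabled entries show up as 'skipping:' in the console output."""
    return [
        (name, cfg) for name, cfg in services.items()
        if is_enabled(cfg.get("enabled", False))
    ]

# Minimal data shaped like the items in the log above.
services = {
    "octavia-api": {"enabled": True, "haproxy": {"octavia_api": {"enabled": "yes"}}},
    "nova-spicehtml5proxy": {"enabled": False},
}
print([name for name, _ in services_to_template(services)])  # -> ['octavia-api']
```

This matches what the log shows: the disabled nova-spicehtml5proxy and nova-serialproxy items are skipped on every node, while the enabled octavia-api item produces a `changed:` result.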
2026-04-09 02:34:21.110614 | orchestrator | 2026-04-09 02:34:21.110620 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-09 02:34:21.110626 | orchestrator | Thursday 09 April 2026 02:34:15 +0000 (0:00:01.496) 0:04:40.960 ******** 2026-04-09 02:34:21.110640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:34:21.110668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:34:21.110680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:34:21.110688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:34:21.110698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:34:21.110712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:34:23.260365 | orchestrator | 2026-04-09 02:34:23.260527 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-09 02:34:23.260557 | orchestrator | Thursday 09 April 2026 02:34:21 +0000 (0:00:05.550) 0:04:46.510 ******** 2026-04-09 02:34:23.260581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 02:34:23.260607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 02:34:23.260630 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:23.260678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 02:34:23.260746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 02:34:23.260836 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:23.260860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 02:34:23.260879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 02:34:23.260899 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:23.260917 | orchestrator | 2026-04-09 02:34:23.260937 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-09 02:34:23.260957 | orchestrator | Thursday 09 April 2026 02:34:22 +0000 (0:00:01.165) 0:04:47.675 ******** 2026-04-09 02:34:23.260976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-09 02:34:23.260999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 02:34:23.261024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 02:34:23.261056 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:23.261075 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-09 02:34:23.261087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 02:34:23.261098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 02:34:23.261109 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:23.261120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-09 02:34:23.261131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 02:34:23.261160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 02:34:29.793625 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:29.793727 | orchestrator | 2026-04-09 02:34:29.793740 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-09 02:34:29.793749 | orchestrator | Thursday 09 April 2026 02:34:23 +0000 (0:00:00.984) 0:04:48.660 ******** 2026-04-09 
02:34:29.793758 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:29.793766 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:29.793774 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:29.793782 | orchestrator | 2026-04-09 02:34:29.793791 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-09 02:34:29.793799 | orchestrator | Thursday 09 April 2026 02:34:23 +0000 (0:00:00.450) 0:04:49.110 ******** 2026-04-09 02:34:29.793807 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:29.793815 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:29.793823 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:29.793831 | orchestrator | 2026-04-09 02:34:29.793839 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-09 02:34:29.793847 | orchestrator | Thursday 09 April 2026 02:34:25 +0000 (0:00:01.651) 0:04:50.762 ******** 2026-04-09 02:34:29.793854 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:34:29.793863 | orchestrator | 2026-04-09 02:34:29.793871 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-09 02:34:29.793879 | orchestrator | Thursday 09 April 2026 02:34:27 +0000 (0:00:01.897) 0:04:52.659 ******** 2026-04-09 02:34:29.793890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 02:34:29.793925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 02:34:29.793947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:34:29.793956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:34:29.793966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 02:34:29.793991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 02:34:29.794000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 02:34:29.794008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:34:29.794075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:34:29.794087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 02:34:29.794100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 02:34:29.794109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 02:34:29.794126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:34:31.714272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 02:34:31.714369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 02:34:31.714395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 02:34:31.714414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-09 02:34:31.714419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 02:34:31.714436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-09 02:34:31.714446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:31.714450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-09 02:34:31.714458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-09 02:34:31.714463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:31.714471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.460272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.460412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.460422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 02:34:32.460427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.460443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 02:34:32.460449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 02:34:32.460454 | orchestrator |
2026-04-09 02:34:32.460460 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-09 02:34:32.460466 | orchestrator | Thursday 09 April 2026 02:34:31 +0000 (0:00:04.611) 0:04:57.271 ********
2026-04-09 02:34:32.460471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-09 02:34:32.460490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 02:34:32.460504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.460508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.460517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-09 02:34:32.460522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 02:34:32.460527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 02:34:32.460536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-09 02:34:32.634504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.634590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-09 02:34:32.634603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.634626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.634636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 02:34:32.634644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.634668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-09 02:34:32.634694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 02:34:32.634698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-09 02:34:32.634704 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:34:32.634717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.634723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:32.634729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 02:34:32.634736 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:34:32.634753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-09 02:34:34.431869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 02:34:34.431979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:34.431997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:34.432027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 02:34:34.432043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-09 02:34:34.432081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-09 02:34:34.432113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:34.432125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:34:34.432136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 02:34:34.432148 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:34:34.432162 | orchestrator |
2026-04-09 02:34:34.432174 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-09 02:34:34.432186 | orchestrator | Thursday 09 April 2026 02:34:32 +0000 (0:00:00.933) 0:04:58.204 ********
2026-04-09 02:34:34.432203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-04-09 02:34:34.432218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-09 02:34:34.432231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-09 02:34:34.432244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-04-09 02:34:34.432257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-09 02:34:34.432276 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:34:34.432323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-09 02:34:34.432341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-09 02:34:34.432353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-09 02:34:34.432364 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:34:34.432375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-04-09 02:34:34.432395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-09 02:34:41.950575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-09 02:34:41.950683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-09 02:34:41.950701 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:34:41.950723 | orchestrator |
2026-04-09 02:34:41.950744 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-09 02:34:41.950765 | orchestrator | Thursday 09 April 2026 02:34:34 +0000 (0:00:01.621) 0:04:59.825 ********
2026-04-09 02:34:41.950785 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:34:41.950803 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:34:41.950822 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:34:41.950843 | orchestrator |
2026-04-09 02:34:41.950862 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-04-09 02:34:41.950882 | orchestrator | Thursday 09 April 2026 02:34:34 +0000 (0:00:00.470) 0:05:00.296 ********
2026-04-09 02:34:41.950903 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:34:41.950922 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:34:41.950966 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:34:41.951003 | orchestrator |
2026-04-09 02:34:41.951022 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-04-09 02:34:41.951043 | orchestrator | Thursday 09 April 2026 02:34:36 +0000 (0:00:01.414) 0:05:01.711 ********
2026-04-09 02:34:41.951064 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:34:41.951084 | orchestrator |
2026-04-09 02:34:41.951105 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-04-09 02:34:41.951127 | orchestrator | Thursday 09 April 2026 02:34:38 +0000 (0:00:01.932) 0:05:03.643 ********
2026-04-09 02:34:41.951154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 02:34:41.951217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 02:34:41.951268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 02:34:41.951318 | orchestrator |
2026-04-09 02:34:41.951392 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-04-09 02:34:41.951417 | orchestrator | Thursday 09 April 2026 02:34:40 +0000 (0:00:02.187) 0:05:05.831 ********
2026-04-09 02:34:41.951442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 02:34:41.951479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 02:34:41.951499 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:34:41.951519 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:34:41.951540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 02:34:41.951559 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:41.951577 | orchestrator | 2026-04-09 02:34:41.951595 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-09 02:34:41.951614 | orchestrator | Thursday 09 April 2026 02:34:40 +0000 (0:00:00.509) 0:05:06.341 ******** 2026-04-09 02:34:41.951634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-09 02:34:41.951667 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:53.002433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-09 02:34:53.002575 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:53.002601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-09 02:34:53.002621 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:53.002639 | orchestrator | 2026-04-09 02:34:53.002658 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-09 02:34:53.002678 | orchestrator | Thursday 09 April 
2026 02:34:41 +0000 (0:00:01.016) 0:05:07.357 ******** 2026-04-09 02:34:53.002696 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:53.002714 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:53.002732 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:53.002750 | orchestrator | 2026-04-09 02:34:53.002767 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-09 02:34:53.002785 | orchestrator | Thursday 09 April 2026 02:34:42 +0000 (0:00:00.488) 0:05:07.845 ******** 2026-04-09 02:34:53.002802 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:53.003070 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:53.003102 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:53.003124 | orchestrator | 2026-04-09 02:34:53.003169 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-09 02:34:53.003206 | orchestrator | Thursday 09 April 2026 02:34:43 +0000 (0:00:01.406) 0:05:09.252 ******** 2026-04-09 02:34:53.003227 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:34:53.003248 | orchestrator | 2026-04-09 02:34:53.003267 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-09 02:34:53.003376 | orchestrator | Thursday 09 April 2026 02:34:45 +0000 (0:00:01.538) 0:05:10.791 ******** 2026-04-09 02:34:53.003422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 02:34:53.003452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 02:34:53.003504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 02:34:53.003527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 02:34:53.003575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': 
'30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 02:34:53.003596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 02:34:53.003615 | orchestrator | 2026-04-09 02:34:53.003634 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-09 02:34:53.003651 | orchestrator | Thursday 09 April 2026 02:34:52 +0000 (0:00:06.913) 0:05:17.704 ******** 2026-04-09 02:34:53.003671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 02:34:53.003704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 02:34:59.046895 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:59.047013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 02:34:59.047030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 02:34:59.047040 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:59.047048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 02:34:59.047056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 02:34:59.047085 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:59.047094 | orchestrator | 2026-04-09 02:34:59.047099 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-09 
02:34:59.047105 | orchestrator | Thursday 09 April 2026 02:34:52 +0000 (0:00:00.701) 0:05:18.406 ******** 2026-04-09 02:34:59.047124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047161 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:59.047165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047170 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047174 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:59.047178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 02:34:59.047196 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:59.047205 | orchestrator | 2026-04-09 02:34:59.047209 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-09 02:34:59.047214 | orchestrator | Thursday 09 April 2026 02:34:54 +0000 (0:00:01.031) 0:05:19.438 ******** 2026-04-09 02:34:59.047218 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:34:59.047223 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:34:59.047227 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:34:59.047231 | orchestrator | 2026-04-09 02:34:59.047236 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-09 02:34:59.047240 | orchestrator | Thursday 09 April 2026 02:34:55 +0000 (0:00:01.277) 0:05:20.715 ******** 2026-04-09 02:34:59.047244 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:34:59.047249 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:34:59.047253 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:34:59.047257 | orchestrator | 2026-04-09 02:34:59.047262 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-09 02:34:59.047266 | orchestrator | Thursday 09 April 2026 02:34:57 +0000 (0:00:02.358) 0:05:23.073 ******** 2026-04-09 02:34:59.047319 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:59.047324 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:59.047329 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:59.047333 | orchestrator | 2026-04-09 02:34:59.047337 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-09 02:34:59.047341 | orchestrator | Thursday 09 April 2026 02:34:58 +0000 (0:00:00.702) 0:05:23.775 ******** 2026-04-09 02:34:59.047346 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:59.047350 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:34:59.047354 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:34:59.047358 | orchestrator | 2026-04-09 02:34:59.047363 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-09 02:34:59.047367 | orchestrator | Thursday 09 April 2026 02:34:58 +0000 (0:00:00.337) 0:05:24.113 ******** 2026-04-09 02:34:59.047371 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:34:59.047380 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.020676 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.020768 | orchestrator | 2026-04-09 02:35:43.020778 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-04-09 02:35:43.020786 | orchestrator | Thursday 09 April 2026 02:34:59 +0000 (0:00:00.338) 0:05:24.452 ******** 2026-04-09 02:35:43.020793 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:43.020798 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.020804 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.020810 | orchestrator | 2026-04-09 02:35:43.020816 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-09 02:35:43.020821 | orchestrator | Thursday 09 April 2026 02:34:59 +0000 (0:00:00.362) 0:05:24.815 ******** 2026-04-09 02:35:43.020827 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:43.020833 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.020839 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.020844 | orchestrator | 2026-04-09 02:35:43.020850 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-09 02:35:43.020867 | orchestrator | Thursday 09 April 2026 02:35:00 +0000 (0:00:00.668) 0:05:25.483 ******** 2026-04-09 02:35:43.020874 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:43.020879 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.020885 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.020890 | orchestrator | 2026-04-09 02:35:43.020896 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-09 02:35:43.020901 | orchestrator | Thursday 09 April 2026 02:35:00 +0000 (0:00:00.585) 0:05:26.069 ******** 2026-04-09 02:35:43.020906 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:43.020913 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:43.020919 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:43.020924 | orchestrator | 2026-04-09 02:35:43.020930 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-09 02:35:43.020951 | orchestrator | Thursday 09 April 2026 02:35:01 +0000 (0:00:00.698) 0:05:26.767 ******** 2026-04-09 02:35:43.020957 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:43.020963 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:43.020968 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:43.020973 | orchestrator | 2026-04-09 02:35:43.020979 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-09 02:35:43.020984 | orchestrator | Thursday 09 April 2026 02:35:02 +0000 (0:00:00.772) 0:05:27.540 ******** 2026-04-09 02:35:43.020990 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:43.020995 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:43.021000 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:43.021006 | orchestrator | 2026-04-09 02:35:43.021011 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-09 02:35:43.021017 | orchestrator | Thursday 09 April 2026 02:35:03 +0000 (0:00:00.936) 0:05:28.477 ******** 2026-04-09 02:35:43.021022 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:43.021027 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:43.021032 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:43.021038 | orchestrator | 2026-04-09 02:35:43.021043 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-09 02:35:43.021049 | orchestrator | Thursday 09 April 2026 02:35:03 +0000 (0:00:00.871) 0:05:29.348 ******** 2026-04-09 02:35:43.021054 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:43.021059 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:43.021065 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:43.021071 | orchestrator | 2026-04-09 02:35:43.021080 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
**************** 2026-04-09 02:35:43.021089 | orchestrator | Thursday 09 April 2026 02:35:04 +0000 (0:00:00.888) 0:05:30.236 ******** 2026-04-09 02:35:43.021097 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:35:43.021106 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:35:43.021114 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:35:43.021123 | orchestrator | 2026-04-09 02:35:43.021131 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-09 02:35:43.021140 | orchestrator | Thursday 09 April 2026 02:35:09 +0000 (0:00:04.653) 0:05:34.890 ******** 2026-04-09 02:35:43.021148 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:43.021156 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:43.021164 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:43.021173 | orchestrator | 2026-04-09 02:35:43.021182 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-09 02:35:43.021190 | orchestrator | Thursday 09 April 2026 02:35:12 +0000 (0:00:03.212) 0:05:38.103 ******** 2026-04-09 02:35:43.021198 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:35:43.021206 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:35:43.021214 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:35:43.021223 | orchestrator | 2026-04-09 02:35:43.021231 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-09 02:35:43.021240 | orchestrator | Thursday 09 April 2026 02:35:28 +0000 (0:00:15.439) 0:05:53.542 ******** 2026-04-09 02:35:43.021275 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:43.021284 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:43.021293 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:43.021302 | orchestrator | 2026-04-09 02:35:43.021311 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-09 02:35:43.021320 | 
orchestrator | Thursday 09 April 2026 02:35:28 +0000 (0:00:00.801) 0:05:54.344 ******** 2026-04-09 02:35:43.021329 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:35:43.021339 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:35:43.021361 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:35:43.021379 | orchestrator | 2026-04-09 02:35:43.021388 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-09 02:35:43.021398 | orchestrator | Thursday 09 April 2026 02:35:33 +0000 (0:00:04.597) 0:05:58.942 ******** 2026-04-09 02:35:43.021423 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:43.021434 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.021443 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.021452 | orchestrator | 2026-04-09 02:35:43.021461 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-09 02:35:43.021470 | orchestrator | Thursday 09 April 2026 02:35:34 +0000 (0:00:00.756) 0:05:59.698 ******** 2026-04-09 02:35:43.021477 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:43.021484 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.021491 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.021497 | orchestrator | 2026-04-09 02:35:43.021522 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-09 02:35:43.021531 | orchestrator | Thursday 09 April 2026 02:35:34 +0000 (0:00:00.395) 0:06:00.093 ******** 2026-04-09 02:35:43.021540 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:43.021551 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.021561 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.021570 | orchestrator | 2026-04-09 02:35:43.021579 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-09 02:35:43.021586 | 
orchestrator | Thursday 09 April 2026 02:35:35 +0000 (0:00:00.375) 0:06:00.469 ******** 2026-04-09 02:35:43.021593 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:43.021600 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.021606 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.021613 | orchestrator | 2026-04-09 02:35:43.021619 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-09 02:35:43.021626 | orchestrator | Thursday 09 April 2026 02:35:35 +0000 (0:00:00.365) 0:06:00.835 ******** 2026-04-09 02:35:43.021632 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:43.021645 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.021652 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.021659 | orchestrator | 2026-04-09 02:35:43.021665 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-09 02:35:43.021672 | orchestrator | Thursday 09 April 2026 02:35:36 +0000 (0:00:00.731) 0:06:01.566 ******** 2026-04-09 02:35:43.021678 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:43.021684 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:43.021689 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:43.021694 | orchestrator | 2026-04-09 02:35:43.021700 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-09 02:35:43.021705 | orchestrator | Thursday 09 April 2026 02:35:36 +0000 (0:00:00.375) 0:06:01.942 ******** 2026-04-09 02:35:43.021711 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:43.021716 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:43.021722 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:43.021727 | orchestrator | 2026-04-09 02:35:43.021732 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-09 02:35:43.021738 | orchestrator | 
Thursday 09 April 2026 02:35:41 +0000 (0:00:04.735) 0:06:06.677 ******** 2026-04-09 02:35:43.021743 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:43.021749 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:43.021754 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:43.021759 | orchestrator | 2026-04-09 02:35:43.021765 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:35:43.021776 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-09 02:35:43.021789 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-09 02:35:43.021800 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-09 02:35:43.021810 | orchestrator | 2026-04-09 02:35:43.021826 | orchestrator | 2026-04-09 02:35:43.021836 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:35:43.021844 | orchestrator | Thursday 09 April 2026 02:35:42 +0000 (0:00:00.833) 0:06:07.511 ******** 2026-04-09 02:35:43.021853 | orchestrator | =============================================================================== 2026-04-09 02:35:43.021862 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.44s 2026-04-09 02:35:43.021870 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.91s 2026-04-09 02:35:43.021877 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.55s 2026-04-09 02:35:43.021885 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.74s 2026-04-09 02:35:43.021893 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.65s 2026-04-09 02:35:43.021900 | orchestrator | haproxy-config : Copying over 
prometheus haproxy config ----------------- 4.61s 2026-04-09 02:35:43.021909 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.61s 2026-04-09 02:35:43.021918 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.60s 2026-04-09 02:35:43.021926 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.47s 2026-04-09 02:35:43.021934 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.34s 2026-04-09 02:35:43.021943 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.31s 2026-04-09 02:35:43.021952 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.03s 2026-04-09 02:35:43.021961 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.81s 2026-04-09 02:35:43.021969 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.75s 2026-04-09 02:35:43.021979 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.69s 2026-04-09 02:35:43.021988 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.67s 2026-04-09 02:35:43.021997 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.64s 2026-04-09 02:35:43.022006 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.59s 2026-04-09 02:35:43.022014 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.53s 2026-04-09 02:35:43.022086 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.51s 2026-04-09 02:35:45.624755 | orchestrator | 2026-04-09 02:35:45 | INFO  | Task 88c92c48-f38d-4f2a-9ebc-d87c547849d3 (opensearch) was prepared for execution. 
2026-04-09 02:35:45.624866 | orchestrator | 2026-04-09 02:35:45 | INFO  | It takes a moment until task 88c92c48-f38d-4f2a-9ebc-d87c547849d3 (opensearch) has been started and output is visible here. 2026-04-09 02:35:56.904449 | orchestrator | 2026-04-09 02:35:56.904635 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 02:35:56.904656 | orchestrator | 2026-04-09 02:35:56.904668 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 02:35:56.904680 | orchestrator | Thursday 09 April 2026 02:35:50 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-04-09 02:35:56.904691 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:35:56.904704 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:35:56.904715 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:35:56.904726 | orchestrator | 2026-04-09 02:35:56.904744 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 02:35:56.904798 | orchestrator | Thursday 09 April 2026 02:35:50 +0000 (0:00:00.348) 0:00:00.615 ******** 2026-04-09 02:35:56.904822 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-09 02:35:56.904842 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-09 02:35:56.904861 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-09 02:35:56.904881 | orchestrator | 2026-04-09 02:35:56.904899 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-09 02:35:56.904960 | orchestrator | 2026-04-09 02:35:56.904984 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 02:35:56.905004 | orchestrator | Thursday 09 April 2026 02:35:50 +0000 (0:00:00.470) 0:00:01.085 ******** 2026-04-09 02:35:56.905024 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-09 02:35:56.905039 | orchestrator | 2026-04-09 02:35:56.905051 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-09 02:35:56.905064 | orchestrator | Thursday 09 April 2026 02:35:51 +0000 (0:00:00.525) 0:00:01.611 ******** 2026-04-09 02:35:56.905076 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 02:35:56.905089 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 02:35:56.905103 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 02:35:56.905116 | orchestrator | 2026-04-09 02:35:56.905128 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-09 02:35:56.905140 | orchestrator | Thursday 09 April 2026 02:35:52 +0000 (0:00:00.695) 0:00:02.306 ******** 2026-04-09 02:35:56.905157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:35:56.905177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:35:56.905217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:35:56.905300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:35:56.905326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:35:56.905339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:35:56.905351 | orchestrator | 2026-04-09 02:35:56.905362 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 02:35:56.905372 | orchestrator | Thursday 09 April 2026 02:35:53 +0000 (0:00:01.736) 0:00:04.043 ******** 2026-04-09 02:35:56.905383 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:35:56.905394 | orchestrator | 2026-04-09 02:35:56.905405 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-09 02:35:56.905416 | orchestrator | Thursday 09 April 2026 02:35:54 +0000 (0:00:00.549) 0:00:04.593 ******** 2026-04-09 02:35:56.905442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:35:57.746013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:35:57.746139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:35:57.746148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:35:57.746153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:35:57.746203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:35:57.746209 | orchestrator | 2026-04-09 02:35:57.746215 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-09 02:35:57.746220 | orchestrator | Thursday 09 April 2026 02:35:56 +0000 (0:00:02.483) 0:00:07.077 ******** 
2026-04-09 02:35:57.746226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 02:35:57.746231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-04-09 02:35:57.746280 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:57.746287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 02:35:57.746319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 02:35:58.893915 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:58.894013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 02:35:58.894088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 02:35:58.894099 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:58.894108 | orchestrator | 2026-04-09 02:35:58.894118 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-09 02:35:58.894127 | orchestrator | Thursday 09 April 2026 02:35:57 +0000 (0:00:00.843) 0:00:07.920 ******** 2026-04-09 02:35:58.894159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 02:35:58.894181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 02:35:58.894204 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:35:58.894213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 02:35:58.894222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 02:35:58.894230 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:35:58.894275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 02:35:58.894321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 02:35:58.894330 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:35:58.894338 | orchestrator | 2026-04-09 02:35:58.894346 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-09 02:35:58.894362 | orchestrator | Thursday 09 April 2026 02:35:58 +0000 (0:00:01.141) 0:00:09.061 ******** 2026-04-09 02:36:07.404596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:36:07.404747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:36:07.404777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:36:07.404853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:36:07.404909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:36:07.404932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:36:07.404955 | orchestrator | 2026-04-09 02:36:07.404968 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-09 02:36:07.404980 | orchestrator | Thursday 09 April 2026 02:36:01 +0000 (0:00:02.453) 0:00:11.515 ******** 2026-04-09 02:36:07.404992 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:36:07.405004 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:36:07.405015 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:36:07.405026 | orchestrator | 2026-04-09 02:36:07.405037 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-09 02:36:07.405048 | orchestrator | Thursday 09 April 2026 02:36:03 +0000 (0:00:02.466) 0:00:13.982 ******** 2026-04-09 02:36:07.405059 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:36:07.405069 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:36:07.405080 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:36:07.405091 | 
orchestrator | 2026-04-09 02:36:07.405101 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-09 02:36:07.405112 | orchestrator | Thursday 09 April 2026 02:36:05 +0000 (0:00:01.875) 0:00:15.857 ******** 2026-04-09 02:36:07.405124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:36:07.405142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-04-09 02:36:07.405164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 02:39:01.642212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-04-09 02:39:01.642346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:39:01.642375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 02:39:01.642383 | orchestrator | 2026-04-09 02:39:01.642391 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 02:39:01.642400 | orchestrator | Thursday 09 April 2026 02:36:07 +0000 (0:00:01.720) 0:00:17.578 ******** 2026-04-09 02:39:01.642406 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:39:01.642415 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:39:01.642421 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:39:01.642427 | orchestrator | 2026-04-09 02:39:01.642435 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 02:39:01.642441 | orchestrator | Thursday 09 April 2026 02:36:07 +0000 (0:00:00.317) 0:00:17.895 ******** 2026-04-09 02:39:01.642448 | orchestrator | 2026-04-09 02:39:01.642454 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 02:39:01.642459 | orchestrator | Thursday 09 April 2026 02:36:07 +0000 (0:00:00.067) 0:00:17.962 ******** 2026-04-09 02:39:01.642465 | orchestrator | 2026-04-09 02:39:01.642471 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 02:39:01.642485 | orchestrator | Thursday 09 April 2026 02:36:07 +0000 (0:00:00.069) 0:00:18.032 ******** 2026-04-09 02:39:01.642491 | orchestrator | 2026-04-09 02:39:01.642498 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-09 02:39:01.642521 | orchestrator | Thursday 09 April 2026 02:36:07 +0000 (0:00:00.091) 0:00:18.124 ******** 2026-04-09 02:39:01.642528 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:39:01.642534 | orchestrator | 
2026-04-09 02:39:01.642603 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-09 02:39:01.642610 | orchestrator | Thursday 09 April 2026 02:36:08 +0000 (0:00:00.240) 0:00:18.365 ******** 2026-04-09 02:39:01.642617 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:39:01.642623 | orchestrator | 2026-04-09 02:39:01.642630 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-09 02:39:01.642636 | orchestrator | Thursday 09 April 2026 02:36:08 +0000 (0:00:00.691) 0:00:19.056 ******** 2026-04-09 02:39:01.642643 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:39:01.642650 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:39:01.642657 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:39:01.642663 | orchestrator | 2026-04-09 02:39:01.642669 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-09 02:39:01.642676 | orchestrator | Thursday 09 April 2026 02:37:19 +0000 (0:01:11.088) 0:01:30.145 ******** 2026-04-09 02:39:01.642682 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:39:01.642689 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:39:01.642696 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:39:01.642703 | orchestrator | 2026-04-09 02:39:01.642710 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 02:39:01.642716 | orchestrator | Thursday 09 April 2026 02:38:50 +0000 (0:01:30.520) 0:03:00.666 ******** 2026-04-09 02:39:01.642724 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:39:01.642732 | orchestrator | 2026-04-09 02:39:01.642739 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-09 02:39:01.642747 | orchestrator | Thursday 09 April 2026 02:38:51 +0000 
(0:00:00.539) 0:03:01.206 ******** 2026-04-09 02:39:01.642754 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:39:01.642761 | orchestrator | 2026-04-09 02:39:01.642767 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-09 02:39:01.642773 | orchestrator | Thursday 09 April 2026 02:38:53 +0000 (0:00:02.851) 0:03:04.058 ******** 2026-04-09 02:39:01.642780 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:39:01.642788 | orchestrator | 2026-04-09 02:39:01.642795 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-09 02:39:01.642802 | orchestrator | Thursday 09 April 2026 02:38:56 +0000 (0:00:02.373) 0:03:06.431 ******** 2026-04-09 02:39:01.642810 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:39:01.642818 | orchestrator | 2026-04-09 02:39:01.642826 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-09 02:39:01.642834 | orchestrator | Thursday 09 April 2026 02:38:59 +0000 (0:00:02.794) 0:03:09.226 ******** 2026-04-09 02:39:01.642842 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:39:01.642850 | orchestrator | 2026-04-09 02:39:01.642857 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:39:01.642866 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 02:39:01.642875 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 02:39:01.642890 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 02:39:01.642897 | orchestrator | 2026-04-09 02:39:01.642905 | orchestrator | 2026-04-09 02:39:01.642918 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:39:01.642925 | orchestrator | Thursday 
09 April 2026 02:39:01 +0000 (0:00:02.568) 0:03:11.795 ******** 2026-04-09 02:39:01.642932 | orchestrator | =============================================================================== 2026-04-09 02:39:01.642940 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 90.52s 2026-04-09 02:39:01.642947 | orchestrator | opensearch : Restart opensearch container ------------------------------ 71.09s 2026-04-09 02:39:01.642954 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.85s 2026-04-09 02:39:01.642961 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.79s 2026-04-09 02:39:01.642969 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2026-04-09 02:39:01.642977 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.48s 2026-04-09 02:39:01.642985 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.47s 2026-04-09 02:39:01.642992 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.45s 2026-04-09 02:39:01.642999 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.37s 2026-04-09 02:39:01.643007 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.88s 2026-04-09 02:39:01.643015 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.74s 2026-04-09 02:39:01.643022 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.72s 2026-04-09 02:39:01.643029 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.14s 2026-04-09 02:39:01.643036 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.84s 2026-04-09 02:39:01.643044 | orchestrator | opensearch : Setting 
sysctl values -------------------------------------- 0.70s 2026-04-09 02:39:01.643051 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.69s 2026-04-09 02:39:01.643069 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-04-09 02:39:02.058848 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-04-09 02:39:02.058964 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-04-09 02:39:02.058987 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-04-09 02:39:04.835839 | orchestrator | 2026-04-09 02:39:04 | INFO  | Task 9a0a5da7-e1c5-456e-bb2a-63f2b3343cd9 (memcached) was prepared for execution. 2026-04-09 02:39:04.835907 | orchestrator | 2026-04-09 02:39:04 | INFO  | It takes a moment until task 9a0a5da7-e1c5-456e-bb2a-63f2b3343cd9 (memcached) has been started and output is visible here. 
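The opensearch post-config steps logged above ("Check if a log retention policy exists", "Create new log retention policy", "Apply retention policy to existing indices") correspond to calls against OpenSearch's Index State Management (ISM) plugin API. A minimal sketch of the request shapes involved — the policy id `retention`, the 14-day period, and the endpoint base (taken from the healthcheck address in the log) are illustrative assumptions, not values read from the role:

```python
# Hedged sketch, NOT the kolla-ansible role's actual code: builds the ISM
# endpoint URL and a minimal delete-after-N-days policy body.
import json

OPENSEARCH = "http://192.168.16.10:9200"  # internal address seen in the healthcheck above
POLICY_ID = "retention"                   # hypothetical policy id

def ism_policy_url(base: str, policy_id: str) -> str:
    """GET on this URL returns 404 while the policy is absent; PUT creates it."""
    return f"{base}/_plugins/_ism/policies/{policy_id}"

def retention_policy(days: int) -> dict:
    """Minimal ISM policy: indices start in 'hot' and are deleted once older than `days`."""
    return {
        "policy": {
            "description": f"delete indices after {days} days",
            "default_state": "hot",
            "states": [
                {
                    "name": "hot",
                    "actions": [],
                    "transitions": [
                        {"state_name": "delete",
                         "conditions": {"min_index_age": f"{days}d"}}
                    ],
                },
                {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
            ],
        }
    }

if __name__ == "__main__":
    # The "check" task would GET this URL; the "create" task would PUT this body.
    print(ism_policy_url(OPENSEARCH, POLICY_ID))
    print(json.dumps(retention_policy(14), indent=2))
```

Applying the policy to existing indices (the last task in the recap) would then be a POST to `_plugins/_ism/add/<index-pattern>` referencing the same policy id.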
2026-04-09 02:39:17.458830 | orchestrator | 2026-04-09 02:39:17.458934 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 02:39:17.458950 | orchestrator | 2026-04-09 02:39:17.458961 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 02:39:17.458971 | orchestrator | Thursday 09 April 2026 02:39:09 +0000 (0:00:00.297) 0:00:00.297 ******** 2026-04-09 02:39:17.458981 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:39:17.458992 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:39:17.459002 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:39:17.459012 | orchestrator | 2026-04-09 02:39:17.459022 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 02:39:17.459033 | orchestrator | Thursday 09 April 2026 02:39:09 +0000 (0:00:00.340) 0:00:00.638 ******** 2026-04-09 02:39:17.459045 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-09 02:39:17.459056 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-09 02:39:17.459066 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-09 02:39:17.459077 | orchestrator | 2026-04-09 02:39:17.459088 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-09 02:39:17.459129 | orchestrator | 2026-04-09 02:39:17.459215 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-09 02:39:17.459223 | orchestrator | Thursday 09 April 2026 02:39:10 +0000 (0:00:00.470) 0:00:01.108 ******** 2026-04-09 02:39:17.459230 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:39:17.459236 | orchestrator | 2026-04-09 02:39:17.459242 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-04-09 02:39:17.459248 | orchestrator | Thursday 09 April 2026 02:39:10 +0000 (0:00:00.514) 0:00:01.623 ******** 2026-04-09 02:39:17.459254 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-09 02:39:17.459260 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-09 02:39:17.459266 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-09 02:39:17.459271 | orchestrator | 2026-04-09 02:39:17.459277 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-09 02:39:17.459283 | orchestrator | Thursday 09 April 2026 02:39:11 +0000 (0:00:00.686) 0:00:02.309 ******** 2026-04-09 02:39:17.459288 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-09 02:39:17.459294 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-09 02:39:17.459300 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-09 02:39:17.459306 | orchestrator | 2026-04-09 02:39:17.459311 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-04-09 02:39:17.459317 | orchestrator | Thursday 09 April 2026 02:39:13 +0000 (0:00:01.895) 0:00:04.204 ******** 2026-04-09 02:39:17.459336 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:39:17.459342 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:39:17.459347 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:39:17.459353 | orchestrator | 2026-04-09 02:39:17.459359 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-09 02:39:17.459365 | orchestrator | Thursday 09 April 2026 02:39:14 +0000 (0:00:01.503) 0:00:05.708 ******** 2026-04-09 02:39:17.459370 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:39:17.459376 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:39:17.459381 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:39:17.459387 | orchestrator | 2026-04-09 
02:39:17.459393 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:39:17.459400 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:39:17.459408 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:39:17.459415 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 02:39:17.459421 | orchestrator | 2026-04-09 02:39:17.459428 | orchestrator | 2026-04-09 02:39:17.459436 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:39:17.459442 | orchestrator | Thursday 09 April 2026 02:39:16 +0000 (0:00:02.157) 0:00:07.865 ******** 2026-04-09 02:39:17.459449 | orchestrator | =============================================================================== 2026-04-09 02:39:17.459456 | orchestrator | memcached : Restart memcached container --------------------------------- 2.16s 2026-04-09 02:39:17.459462 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.90s 2026-04-09 02:39:17.459469 | orchestrator | memcached : Check memcached container ----------------------------------- 1.50s 2026-04-09 02:39:17.459476 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.69s 2026-04-09 02:39:17.459482 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.51s 2026-04-09 02:39:17.459489 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-04-09 02:39:17.459504 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-04-09 02:39:20.085091 | orchestrator | 2026-04-09 02:39:20 | INFO  | Task dfab68e5-1ffc-4084-95e6-dbdc98615107 (redis) was prepared for execution. 
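A quick way to verify the memcached containers deployed above are actually serving is the plain-text protocol's `stats` command (send `stats\r\n`, read `STAT <name> <value>` lines terminated by `END`). A small sketch, assuming the node's internal address and the default port 11211 (neither appears in this log excerpt):

```python
# Hedged liveness-probe sketch for a deployed memcached instance.
import socket

def parse_stats(payload: str) -> dict:
    """Parse a memcached `stats` reply into a name -> value dict."""
    stats = {}
    for line in payload.splitlines():
        if line == "END":
            break
        if line.startswith("STAT "):
            _, name, value = line.split(" ", 2)
            stats[name] = value
    return stats

def fetch_stats(host: str = "192.168.16.10", port: int = 11211) -> dict:
    """Open a TCP connection, issue `stats`, and read until the END marker."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"stats\r\n")
        buf = b""
        while not buf.endswith(b"END\r\n"):
            chunk = sock.recv(4096)
            if not chunk:  # peer closed early
                break
            buf += chunk
    return parse_stats(buf.decode())
```

Keys such as `uptime` and `curr_connections` in the parsed dict are a reasonable smoke-test that the restarted container is accepting traffic.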
2026-04-09 02:39:20.085202 | orchestrator | 2026-04-09 02:39:20 | INFO  | It takes a moment until task dfab68e5-1ffc-4084-95e6-dbdc98615107 (redis) has been started and output is visible here. 2026-04-09 02:39:29.531957 | orchestrator | 2026-04-09 02:39:29.532038 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 02:39:29.532046 | orchestrator | 2026-04-09 02:39:29.532051 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 02:39:29.532057 | orchestrator | Thursday 09 April 2026 02:39:24 +0000 (0:00:00.291) 0:00:00.291 ******** 2026-04-09 02:39:29.532062 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:39:29.532068 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:39:29.532073 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:39:29.532077 | orchestrator | 2026-04-09 02:39:29.532082 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 02:39:29.532087 | orchestrator | Thursday 09 April 2026 02:39:25 +0000 (0:00:00.358) 0:00:00.649 ******** 2026-04-09 02:39:29.532092 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-09 02:39:29.532098 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-09 02:39:29.532102 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-09 02:39:29.532107 | orchestrator | 2026-04-09 02:39:29.532112 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-09 02:39:29.532116 | orchestrator | 2026-04-09 02:39:29.532121 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-09 02:39:29.532126 | orchestrator | Thursday 09 April 2026 02:39:25 +0000 (0:00:00.439) 0:00:01.089 ******** 2026-04-09 02:39:29.532155 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-04-09 02:39:29.532161 | orchestrator | 2026-04-09 02:39:29.532166 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-09 02:39:29.532179 | orchestrator | Thursday 09 April 2026 02:39:25 +0000 (0:00:00.518) 0:00:01.607 ******** 2026-04-09 02:39:29.532193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532260 | orchestrator | 2026-04-09 02:39:29.532264 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-09 02:39:29.532269 | orchestrator | Thursday 09 April 2026 02:39:27 +0000 (0:00:01.148) 0:00:02.756 ******** 2026-04-09 02:39:29.532274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:29.532374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667010 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667112 | orchestrator | 2026-04-09 02:39:33.667161 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-09 02:39:33.667175 | orchestrator | Thursday 09 April 2026 02:39:29 +0000 (0:00:02.410) 0:00:05.166 ******** 2026-04-09 02:39:33.667186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667300 | orchestrator | 2026-04-09 02:39:33.667309 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-09 02:39:33.667318 | orchestrator | Thursday 09 April 2026 02:39:31 +0000 (0:00:02.372) 0:00:07.539 ******** 2026-04-09 02:39:33.667326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:33.667393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 02:39:45.346362 | orchestrator | 2026-04-09 02:39:45.346465 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-09 02:39:45.346480 | orchestrator | Thursday 09 April 2026 02:39:33 +0000 (0:00:01.514) 0:00:09.053 ******** 2026-04-09 02:39:45.346488 | orchestrator | 2026-04-09 02:39:45.346494 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-09 02:39:45.346500 | orchestrator | Thursday 09 April 2026 02:39:33 +0000 (0:00:00.078) 0:00:09.132 ******** 2026-04-09 02:39:45.346507 | orchestrator | 2026-04-09 02:39:45.346514 | orchestrator | TASK [redis : Flush handlers] 
**************************************************
2026-04-09 02:39:45.346520 | orchestrator | Thursday 09 April 2026 02:39:33 +0000 (0:00:00.068) 0:00:09.200 ********
2026-04-09 02:39:45.346527 | orchestrator |
2026-04-09 02:39:45.346533 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-09 02:39:45.346540 | orchestrator | Thursday 09 April 2026 02:39:33 +0000 (0:00:00.099) 0:00:09.300 ********
2026-04-09 02:39:45.346547 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:39:45.346555 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:39:45.346563 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:39:45.346567 | orchestrator |
2026-04-09 02:39:45.346571 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-09 02:39:45.346576 | orchestrator | Thursday 09 April 2026 02:39:41 +0000 (0:00:08.019) 0:00:17.319 ********
2026-04-09 02:39:45.346597 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:39:45.346601 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:39:45.346605 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:39:45.346609 | orchestrator |
2026-04-09 02:39:45.346613 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:39:45.346618 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:39:45.346623 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:39:45.346638 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:39:45.346642 | orchestrator |
2026-04-09 02:39:45.346646 | orchestrator |
2026-04-09 02:39:45.346650 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:39:45.346654 | orchestrator | Thursday 09 April 2026 02:39:44 +0000 (0:00:03.284) 0:00:20.604 ********
2026-04-09 02:39:45.346657 | orchestrator | ===============================================================================
2026-04-09 02:39:45.346661 | orchestrator | redis : Restart redis container ----------------------------------------- 8.02s
2026-04-09 02:39:45.346665 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.28s
2026-04-09 02:39:45.346669 | orchestrator | redis : Copying over default config.json files -------------------------- 2.41s
2026-04-09 02:39:45.346672 | orchestrator | redis : Copying over redis config files --------------------------------- 2.37s
2026-04-09 02:39:45.346676 | orchestrator | redis : Check redis containers ------------------------------------------ 1.51s
2026-04-09 02:39:45.346680 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.15s
2026-04-09 02:39:45.346684 | orchestrator | redis : include_tasks --------------------------------------------------- 0.52s
2026-04-09 02:39:45.346687 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-04-09 02:39:45.346691 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2026-04-09 02:39:45.346695 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.25s
2026-04-09 02:39:47.854843 | orchestrator | 2026-04-09 02:39:47 | INFO  | Task 8ab69c6a-f0c5-43c9-95cf-4a8ae090a118 (mariadb) was prepared for execution.
2026-04-09 02:39:47.854950 | orchestrator | 2026-04-09 02:39:47 | INFO  | It takes a moment until task 8ab69c6a-f0c5-43c9-95cf-4a8ae090a118 (mariadb) has been started and output is visible here.
2026-04-09 02:40:02.433899 | orchestrator |
2026-04-09 02:40:02.434078 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 02:40:02.434103 | orchestrator |
2026-04-09 02:40:02.434145 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 02:40:02.434161 | orchestrator | Thursday 09 April 2026 02:39:52 +0000 (0:00:00.172) 0:00:00.172 ********
2026-04-09 02:40:02.434175 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:40:02.434189 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:40:02.434203 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:40:02.434217 | orchestrator |
2026-04-09 02:40:02.434231 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 02:40:02.434246 | orchestrator | Thursday 09 April 2026 02:39:52 +0000 (0:00:00.372) 0:00:00.544 ********
2026-04-09 02:40:02.434260 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-09 02:40:02.434274 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-09 02:40:02.434287 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-09 02:40:02.434300 | orchestrator |
2026-04-09 02:40:02.434314 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-09 02:40:02.434327 | orchestrator |
2026-04-09 02:40:02.434341 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-09 02:40:02.434381 | orchestrator | Thursday 09 April 2026 02:39:53 +0000 (0:00:00.621) 0:00:01.166 ********
2026-04-09 02:40:02.434396 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 02:40:02.434410 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 02:40:02.434423 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 02:40:02.434436 | orchestrator |
2026-04-09 02:40:02.434450 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 02:40:02.434464 | orchestrator | Thursday 09 April 2026 02:39:53 +0000 (0:00:00.407) 0:00:01.574 ******** 2026-04-09 02:40:02.434479 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:40:02.434494 | orchestrator | 2026-04-09 02:40:02.434508 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-09 02:40:02.434521 | orchestrator | Thursday 09 April 2026 02:39:54 +0000 (0:00:00.570) 0:00:02.145 ******** 2026-04-09 02:40:02.434558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 02:40:02.434601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 02:40:02.434633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 02:40:02.434649 | orchestrator |
2026-04-09 02:40:02.434663 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-04-09 02:40:02.434677 | orchestrator | Thursday 09 April 2026 02:39:57 +0000 (0:00:02.777) 0:00:04.922 ********
2026-04-09 02:40:02.434691 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:40:02.434705 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:40:02.434718 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:40:02.434731 | orchestrator |
2026-04-09 02:40:02.434745 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-04-09 02:40:02.434759 | orchestrator | Thursday 09 April 2026 02:39:57 +0000 (0:00:00.680) 0:00:05.602 ********
2026-04-09 02:40:02.434773 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:40:02.434786 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:40:02.434799 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:40:02.434812 | orchestrator |
2026-04-09 02:40:02.434825 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-04-09 02:40:02.434838 | orchestrator | Thursday 09 April 2026 02:39:59 +0000 (0:00:01.465) 0:00:07.067 ********
2026-04-09 02:40:02.434863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro',
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 02:40:10.470644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 02:40:10.470718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 02:40:10.470748 | orchestrator | 2026-04-09 02:40:10.470754 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-09 02:40:10.470760 | orchestrator | Thursday 09 April 2026 02:40:02 +0000 (0:00:03.222) 0:00:10.290 ******** 2026-04-09 02:40:10.470764 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:40:10.470770 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:40:10.470774 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:40:10.470778 | orchestrator | 2026-04-09 02:40:10.470782 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-09 02:40:10.470796 | orchestrator | Thursday 09 April 2026 02:40:03 +0000 (0:00:01.138) 0:00:11.428 ******** 2026-04-09 02:40:10.470800 | 
orchestrator | changed: [testbed-node-0] 2026-04-09 02:40:10.470803 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:40:10.470807 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:40:10.470811 | orchestrator | 2026-04-09 02:40:10.470815 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 02:40:10.470819 | orchestrator | Thursday 09 April 2026 02:40:07 +0000 (0:00:03.841) 0:00:15.270 ******** 2026-04-09 02:40:10.470823 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:40:10.470827 | orchestrator | 2026-04-09 02:40:10.470831 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-09 02:40:10.470835 | orchestrator | Thursday 09 April 2026 02:40:07 +0000 (0:00:00.559) 0:00:15.830 ******** 2026-04-09 02:40:10.470842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:40:10.470850 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:40:10.470858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:40:15.785736 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:40:15.785821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:40:15.785853 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:40:15.785861 | orchestrator | 2026-04-09 02:40:15.785869 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-09 02:40:15.785878 | orchestrator | Thursday 09 April 2026 02:40:10 +0000 (0:00:02.497) 0:00:18.328 ******** 2026-04-09 02:40:15.785887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:40:15.785894 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:40:15.785919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:40:15.785929 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:40:15.785934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:40:15.785938 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:40:15.785941 | orchestrator | 2026-04-09 02:40:15.785945 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-09 02:40:15.785949 | orchestrator | Thursday 09 April 2026 02:40:13 +0000 (0:00:02.855) 0:00:21.184 ******** 2026-04-09 02:40:15.785960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:40:18.710105 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:40:18.710267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:40:18.710290 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:40:18.710320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 02:40:18.710369 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:40:18.710383 | orchestrator | 2026-04-09 02:40:18.710397 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-09 02:40:18.710409 | orchestrator | Thursday 09 April 2026 02:40:15 +0000 (0:00:02.462) 0:00:23.646 ******** 2026-04-09 02:40:18.710435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 02:40:18.710444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 02:40:18.710466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 02:42:38.898237 | orchestrator | 2026-04-09 02:42:38.898319 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-09 02:42:38.898330 | orchestrator | Thursday 09 April 2026 02:40:18 +0000 (0:00:02.926) 0:00:26.573 ******** 2026-04-09 02:42:38.898338 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:42:38.898347 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:42:38.898354 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:42:38.898362 | orchestrator | 2026-04-09 02:42:38.898367 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-09 02:42:38.898371 | orchestrator | Thursday 09 April 2026 02:40:19 +0000 (0:00:00.870) 0:00:27.443 ******** 2026-04-09 02:42:38.898375 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:42:38.898380 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:42:38.898384 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:42:38.898388 | orchestrator | 2026-04-09 02:42:38.898392 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-04-09 02:42:38.898396 | orchestrator | Thursday 09 April 2026 02:40:20 +0000 (0:00:00.568) 0:00:28.012 ******** 2026-04-09 02:42:38.898402 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:42:38.898410 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:42:38.898419 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:42:38.898425 | orchestrator | 2026-04-09 02:42:38.898431 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-09 02:42:38.898437 | orchestrator | Thursday 09 April 2026 02:40:20 +0000 (0:00:00.370) 0:00:28.383 ******** 2026-04-09 02:42:38.898444 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-09 02:42:38.898451 | orchestrator | ...ignoring 2026-04-09 02:42:38.898457 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-09 02:42:38.898463 | orchestrator | ...ignoring 2026-04-09 02:42:38.898468 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-09 02:42:38.898474 | orchestrator | ...ignoring 2026-04-09 02:42:38.898502 | orchestrator | 2026-04-09 02:42:38.898509 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-09 02:42:38.898515 | orchestrator | Thursday 09 April 2026 02:40:31 +0000 (0:00:10.865) 0:00:39.248 ******** 2026-04-09 02:42:38.898520 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:42:38.898527 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:42:38.898533 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:42:38.898539 | orchestrator | 2026-04-09 02:42:38.898546 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-09 02:42:38.898553 | orchestrator | Thursday 09 April 2026 02:40:31 +0000 (0:00:00.431) 0:00:39.679 ******** 2026-04-09 02:42:38.898557 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:42:38.898561 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:42:38.898565 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:42:38.898569 | orchestrator | 2026-04-09 02:42:38.898572 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-09 02:42:38.898577 | orchestrator | Thursday 09 April 2026 02:40:32 +0000 (0:00:00.781) 0:00:40.460 ******** 2026-04-09 02:42:38.898581 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:42:38.898584 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:42:38.898588 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:42:38.898592 | orchestrator | 2026-04-09 02:42:38.898606 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-09 02:42:38.898610 | orchestrator | Thursday 09 April 2026 02:40:33 +0000 (0:00:00.557) 0:00:41.017 ******** 2026-04-09 02:42:38.898614 | orchestrator | skipping: 
[testbed-node-0]
2026-04-09 02:42:38.898617 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:38.898621 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:38.898625 | orchestrator |
2026-04-09 02:42:38.898629 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-09 02:42:38.898632 | orchestrator | Thursday 09 April 2026 02:40:33 +0000 (0:00:00.427) 0:00:41.445 ********
2026-04-09 02:42:38.898636 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:42:38.898640 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:42:38.898644 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:42:38.898647 | orchestrator |
2026-04-09 02:42:38.898651 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-09 02:42:38.898656 | orchestrator | Thursday 09 April 2026 02:40:34 +0000 (0:00:00.454) 0:00:41.900 ********
2026-04-09 02:42:38.898659 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:42:38.898663 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:38.898667 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:38.898671 | orchestrator |
2026-04-09 02:42:38.898674 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-09 02:42:38.898678 | orchestrator | Thursday 09 April 2026 02:40:34 +0000 (0:00:00.943) 0:00:42.844 ********
2026-04-09 02:42:38.898682 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:38.898685 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:38.898689 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-09 02:42:38.898693 | orchestrator |
2026-04-09 02:42:38.898697 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-09 02:42:38.898701 | orchestrator | Thursday 09 April 2026 02:40:35 +0000 (0:00:00.540) 0:00:43.384 ********
2026-04-09 02:42:38.898705 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:42:38.898708 | orchestrator |
2026-04-09 02:42:38.898712 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-04-09 02:42:38.898716 | orchestrator | Thursday 09 April 2026 02:40:46 +0000 (0:00:10.653) 0:00:54.037 ********
2026-04-09 02:42:38.898720 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:42:38.898723 | orchestrator |
2026-04-09 02:42:38.898727 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-09 02:42:38.898731 | orchestrator | Thursday 09 April 2026 02:40:46 +0000 (0:00:00.139) 0:00:54.176 ********
2026-04-09 02:42:38.898735 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:42:38.898755 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:38.898760 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:38.898765 | orchestrator |
2026-04-09 02:42:38.898769 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-04-09 02:42:38.898774 | orchestrator | Thursday 09 April 2026 02:40:47 +0000 (0:00:01.050) 0:00:55.227 ********
2026-04-09 02:42:38.898778 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:42:38.898782 | orchestrator |
2026-04-09 02:42:38.898787 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-04-09 02:42:38.898791 | orchestrator | Thursday 09 April 2026 02:40:55 +0000 (0:00:08.122) 0:01:03.350 ********
2026-04-09 02:42:38.898796 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:42:38.898800 | orchestrator |
2026-04-09 02:42:38.898805 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-04-09 02:42:38.898809 | orchestrator | Thursday 09 April 2026 02:40:57 +0000 (0:00:01.649) 0:01:05.000 ********
2026-04-09 02:42:38.898814 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:42:38.898818 | orchestrator |
2026-04-09 02:42:38.898822 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-04-09 02:42:38.898827 | orchestrator | Thursday 09 April 2026 02:40:59 +0000 (0:00:02.499) 0:01:07.499 ********
2026-04-09 02:42:38.898831 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:42:38.898836 | orchestrator |
2026-04-09 02:42:38.898840 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-09 02:42:38.898845 | orchestrator | Thursday 09 April 2026 02:40:59 +0000 (0:00:00.149) 0:01:07.649 ********
2026-04-09 02:42:38.898849 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:42:38.898854 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:38.898858 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:38.898862 | orchestrator |
2026-04-09 02:42:38.898867 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-09 02:42:38.898871 | orchestrator | Thursday 09 April 2026 02:41:00 +0000 (0:00:00.341) 0:01:07.990 ********
2026-04-09 02:42:38.898876 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:42:38.898880 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-09 02:42:38.898885 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:42:38.898889 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:42:38.898894 | orchestrator |
2026-04-09 02:42:38.898898 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-09 02:42:38.898903 | orchestrator | skipping: no hosts matched
2026-04-09 02:42:38.898907 | orchestrator |
2026-04-09 02:42:38.898911 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-09 02:42:38.898916 | orchestrator |
2026-04-09 02:42:38.898920 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-09 02:42:38.898924 | orchestrator | Thursday 09 April 2026 02:41:00 +0000 (0:00:00.584) 0:01:08.575 ********
2026-04-09 02:42:38.898927 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:42:38.898931 | orchestrator |
2026-04-09 02:42:38.898935 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-09 02:42:38.898938 | orchestrator | Thursday 09 April 2026 02:41:24 +0000 (0:00:23.705) 0:01:32.280 ********
2026-04-09 02:42:38.898942 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:42:38.898946 | orchestrator |
2026-04-09 02:42:38.898950 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-09 02:42:38.898953 | orchestrator | Thursday 09 April 2026 02:41:34 +0000 (0:00:10.527) 0:01:42.808 ********
2026-04-09 02:42:38.898957 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:42:38.898961 | orchestrator |
2026-04-09 02:42:38.898967 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-09 02:42:38.898971 | orchestrator |
2026-04-09 02:42:38.898978 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-09 02:42:38.898982 | orchestrator | Thursday 09 April 2026 02:41:37 +0000 (0:00:02.460) 0:01:45.268 ********
2026-04-09 02:42:38.898989 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:42:38.898993 | orchestrator |
2026-04-09 02:42:38.898997 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-09 02:42:38.899000 | orchestrator | Thursday 09 April 2026 02:41:56 +0000 (0:00:18.963) 0:02:04.232 ********
2026-04-09 02:42:38.899004 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:42:38.899008 | orchestrator |
2026-04-09 02:42:38.899011 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-09 02:42:38.899015 | orchestrator | Thursday 09 April 2026 02:42:13 +0000 (0:00:16.652) 0:02:20.884 ********
2026-04-09 02:42:38.899019 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:42:38.899022 | orchestrator |
2026-04-09 02:42:38.899026 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-09 02:42:38.899030 | orchestrator |
2026-04-09 02:42:38.899033 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-09 02:42:38.899037 | orchestrator | Thursday 09 April 2026 02:42:15 +0000 (0:00:02.797) 0:02:23.682 ********
2026-04-09 02:42:38.899041 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:42:38.899044 | orchestrator |
2026-04-09 02:42:38.899048 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-09 02:42:38.899052 | orchestrator | Thursday 09 April 2026 02:42:28 +0000 (0:00:13.031) 0:02:36.714 ********
2026-04-09 02:42:38.899055 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:42:38.899059 | orchestrator |
2026-04-09 02:42:38.899063 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-09 02:42:38.899066 | orchestrator | Thursday 09 April 2026 02:42:35 +0000 (0:00:06.567) 0:02:43.281 ********
2026-04-09 02:42:38.899070 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:42:38.899074 | orchestrator |
2026-04-09 02:42:38.899077 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-09 02:42:38.899081 | orchestrator |
2026-04-09 02:42:38.899084 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-09 02:42:38.899088 | orchestrator | Thursday 09 April 2026 02:42:38 +0000 (0:00:02.935) 0:02:46.217 ********
2026-04-09 02:42:38.899092 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:42:38.899168 | orchestrator |
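The play above repeatedly divides hosts by their Galera WSREP sync status and only proceeds once every node reports `Synced`. A minimal sketch of that grouping step (my own illustrative helper, not Kolla Ansible's actual task code; the host/status values are made up, not taken from this job):

```python
# Sketch: group hosts by the value of wsrep_local_state_comment, mirroring
# the "Divide hosts by their MariaDB service WSREP sync status" task above.
def divide_by_wsrep_status(status_by_host):
    """Return (synced, not_synced) host lists from a
    {host: wsrep_local_state_comment} mapping."""
    synced = [h for h, s in sorted(status_by_host.items()) if s == "Synced"]
    not_synced = [h for h in sorted(status_by_host) if h not in synced]
    return synced, not_synced

# Hypothetical example data (not from the log): node-2 is still a donor.
statuses = {
    "testbed-node-0": "Synced",
    "testbed-node-1": "Synced",
    "testbed-node-2": "Donor/Desynced",
}
synced, not_synced = divide_by_wsrep_status(statuses)
```

The subsequent "Fail when MariaDB services are not synced across the whole cluster" task would then fail whenever `not_synced` is non-empty.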
2026-04-09 02:42:38.899172 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-09 02:42:38.899180 | orchestrator | Thursday 09 April 2026 02:42:38 +0000 (0:00:00.539) 0:02:46.757 ********
2026-04-09 02:42:51.585725 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:51.585816 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:51.585825 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:42:51.585832 | orchestrator |
2026-04-09 02:42:51.585839 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-09 02:42:51.585846 | orchestrator | Thursday 09 April 2026 02:42:41 +0000 (0:00:02.351) 0:02:49.108 ********
2026-04-09 02:42:51.585852 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:51.585858 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:51.585863 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:42:51.585869 | orchestrator |
2026-04-09 02:42:51.585875 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-09 02:42:51.585881 | orchestrator | Thursday 09 April 2026 02:42:43 +0000 (0:00:02.098) 0:02:51.207 ********
2026-04-09 02:42:51.585886 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:51.585892 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:51.585898 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:42:51.585903 | orchestrator |
2026-04-09 02:42:51.585909 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-09 02:42:51.585915 | orchestrator | Thursday 09 April 2026 02:42:45 +0000 (0:00:02.353) 0:02:53.560 ********
2026-04-09 02:42:51.585920 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:51.585926 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:51.585932 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:42:51.585957 | orchestrator |
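Several tasks in these plays ("Wait for MariaDB service port liveness", "Wait for MariaDB service to be ready through VIP") block until a TCP connect succeeds. A self-contained sketch of such a wait loop, using only the standard library (this is my own illustration of the pattern, not the role's implementation, which uses Ansible's `wait_for` module):

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Poll until a TCP connection to host:port succeeds, or give up
    after `timeout` seconds. Mirrors the port-liveness waits above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # something is listening
        except OSError:
            time.sleep(interval)  # refused or timed out; retry
    return False
```

In the log, the same check runs once per node after each rolling restart, which is why the port-liveness tasks dominate the TASKS RECAP timings.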
2026-04-09 02:42:51.585963 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-09 02:42:51.585969 | orchestrator | Thursday 09 April 2026 02:42:47 +0000 (0:00:01.940) 0:02:55.501 ********
2026-04-09 02:42:51.585975 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:42:51.585981 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:42:51.585986 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:42:51.585992 | orchestrator |
2026-04-09 02:42:51.585997 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-09 02:42:51.586003 | orchestrator | Thursday 09 April 2026 02:42:50 +0000 (0:00:03.028) 0:02:58.529 ********
2026-04-09 02:42:51.586008 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:42:51.586038 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:42:51.586044 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:42:51.586050 | orchestrator |
2026-04-09 02:42:51.586055 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:42:51.586062 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-04-09 02:42:51.586069 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-09 02:42:51.586075 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-09 02:42:51.586080 | orchestrator |
2026-04-09 02:42:51.586086 | orchestrator |
2026-04-09 02:42:51.586092 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:42:51.586097 | orchestrator | Thursday 09 April 2026 02:42:51 +0000 (0:00:00.486) 0:02:59.016 ********
2026-04-09 02:42:51.586121 | orchestrator | ===============================================================================
2026-04-09 02:42:51.586142 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.67s
2026-04-09 02:42:51.586152 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 27.18s
2026-04-09 02:42:51.586162 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.03s
2026-04-09 02:42:51.586171 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.87s
2026-04-09 02:42:51.586180 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.65s
2026-04-09 02:42:51.586190 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.12s
2026-04-09 02:42:51.586196 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 6.57s
2026-04-09 02:42:51.586201 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.26s
2026-04-09 02:42:51.586207 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.84s
2026-04-09 02:42:51.586212 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.22s
2026-04-09 02:42:51.586217 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.03s
2026-04-09 02:42:51.586223 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.94s
2026-04-09 02:42:51.586228 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.93s
2026-04-09 02:42:51.586234 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.86s
2026-04-09 02:42:51.586240 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.78s
2026-04-09 02:42:51.586246 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.50s
2026-04-09 02:42:51.586251 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.50s
2026-04-09 02:42:51.586256 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.46s
2026-04-09 02:42:51.586262 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.35s
2026-04-09 02:42:51.586274 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.35s
2026-04-09 02:42:54.271901 | orchestrator | 2026-04-09 02:42:54 | INFO  | Task 32101737-7fa5-4047-9de2-826db0cac524 (rabbitmq) was prepared for execution.
2026-04-09 02:42:54.272012 | orchestrator | 2026-04-09 02:42:54 | INFO  | It takes a moment until task 32101737-7fa5-4047-9de2-826db0cac524 (rabbitmq) has been started and output is visible here.
2026-04-09 02:43:08.452858 | orchestrator |
2026-04-09 02:43:08.452968 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 02:43:08.452986 | orchestrator |
2026-04-09 02:43:08.452999 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 02:43:08.453013 | orchestrator | Thursday 09 April 2026 02:42:58 +0000 (0:00:00.198) 0:00:00.198 ********
2026-04-09 02:43:08.453025 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:43:08.453038 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:43:08.453049 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:43:08.453061 | orchestrator |
2026-04-09 02:43:08.453073 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 02:43:08.453085 | orchestrator | Thursday 09 April 2026 02:42:59 +0000 (0:00:00.316) 0:00:00.515 ********
2026-04-09 02:43:08.453098 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-09 02:43:08.453244 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-04-09 02:43:08.453262 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-09 02:43:08.453275 | orchestrator |
2026-04-09 02:43:08.453287 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-09 02:43:08.453302 | orchestrator |
2026-04-09 02:43:08.453316 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-09 02:43:08.453330 | orchestrator | Thursday 09 April 2026 02:42:59 +0000 (0:00:00.599) 0:00:01.114 ********
2026-04-09 02:43:08.453345 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:43:08.453361 | orchestrator |
2026-04-09 02:43:08.453376 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-09 02:43:08.453391 | orchestrator | Thursday 09 April 2026 02:43:00 +0000 (0:00:00.621) 0:00:01.736 ********
2026-04-09 02:43:08.453407 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:43:08.453423 | orchestrator |
2026-04-09 02:43:08.453436 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-09 02:43:08.453449 | orchestrator | Thursday 09 April 2026 02:43:01 +0000 (0:00:00.951) 0:00:02.687 ********
2026-04-09 02:43:08.453463 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:43:08.453477 | orchestrator |
2026-04-09 02:43:08.453488 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-09 02:43:08.453499 | orchestrator | Thursday 09 April 2026 02:43:01 +0000 (0:00:00.401) 0:00:03.089 ********
2026-04-09 02:43:08.453512 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:43:08.453523 | orchestrator |
2026-04-09 02:43:08.453535 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-09 02:43:08.453547 | orchestrator | Thursday 09 April 2026 02:43:02 +0000 (0:00:00.487) 0:00:03.577 ********
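The "Check if running RabbitMQ is at most one version behind" / "Catch when RabbitMQ is being downgraded" pair gates the upgrade path: RabbitMQ only supports stepping forward one minor series at a time, and never backward. A rough sketch of such a gate (my own comparison logic for illustration, not the role's actual implementation; it assumes plain three-part `major.minor.patch` strings):

```python
def upgrade_allowed(running, target):
    """Allow moving to the same or the next minor series within one
    major version; reject downgrades and multi-minor jumps."""
    run = tuple(int(p) for p in running.split("."))
    tgt = tuple(int(p) for p in target.split("."))
    if tgt < run:
        return False  # downgrade: "Catch when RabbitMQ is being downgraded"
    if tgt[0] != run[0]:
        return False  # major jumps need intermediate upgrade steps
    return tgt[1] - run[1] <= 1  # at most one minor version ahead
```

In this run both version-detection tasks are skipped (fresh deploy of the 3.13.7 image), so the gate never fires.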
2026-04-09 02:43:08.453559 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:43:08.453570 | orchestrator | 2026-04-09 02:43:08.453582 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-09 02:43:08.453595 | orchestrator | Thursday 09 April 2026 02:43:02 +0000 (0:00:00.405) 0:00:03.983 ******** 2026-04-09 02:43:08.453607 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:43:08.453619 | orchestrator | 2026-04-09 02:43:08.453631 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-09 02:43:08.453662 | orchestrator | Thursday 09 April 2026 02:43:03 +0000 (0:00:00.586) 0:00:04.569 ******** 2026-04-09 02:43:08.453675 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:43:08.453716 | orchestrator | 2026-04-09 02:43:08.453729 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-09 02:43:08.453742 | orchestrator | Thursday 09 April 2026 02:43:04 +0000 (0:00:01.001) 0:00:05.570 ******** 2026-04-09 02:43:08.453754 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:43:08.453767 | orchestrator | 2026-04-09 02:43:08.453779 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-09 02:43:08.453790 | orchestrator | Thursday 09 April 2026 02:43:05 +0000 (0:00:00.917) 0:00:06.488 ******** 2026-04-09 02:43:08.453802 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:43:08.453815 | orchestrator | 2026-04-09 02:43:08.453827 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-09 02:43:08.453839 | orchestrator | Thursday 09 April 2026 02:43:05 +0000 (0:00:00.393) 0:00:06.881 ******** 2026-04-09 02:43:08.453851 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:43:08.453864 | orchestrator | 2026-04-09 
02:43:08.453877 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-09 02:43:08.453892 | orchestrator | Thursday 09 April 2026 02:43:05 +0000 (0:00:00.378) 0:00:07.260 ******** 2026-04-09 02:43:08.453939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 02:43:08.453958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 02:43:08.453974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 02:43:08.453992 | orchestrator | 2026-04-09 02:43:08.454000 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-09 02:43:08.454007 | orchestrator | Thursday 09 April 2026 02:43:06 +0000 (0:00:00.830) 0:00:08.090 ******** 2026-04-09 02:43:08.454074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 02:43:08.454093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 02:43:26.997981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 02:43:26.998228 | orchestrator | 2026-04-09 02:43:26.998249 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-09 02:43:26.998282 | orchestrator | Thursday 09 April 2026 02:43:08 +0000 (0:00:01.699) 0:00:09.790 ******** 2026-04-09 02:43:26.998294 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-09 02:43:26.998307 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-09 02:43:26.998319 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-09 02:43:26.998332 | orchestrator | 2026-04-09 02:43:26.998345 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-04-09 02:43:26.998358 | orchestrator | Thursday 09 April 2026 02:43:09 +0000 (0:00:01.388) 0:00:11.179 ******** 2026-04-09 02:43:26.998381 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-09 02:43:26.998389 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-09 02:43:26.998395 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-09 02:43:26.998414 | orchestrator | 2026-04-09 02:43:26.998421 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-09 02:43:26.998427 | orchestrator | Thursday 09 April 2026 02:43:11 +0000 (0:00:01.660) 0:00:12.840 ******** 2026-04-09 02:43:26.998434 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-09 02:43:26.998441 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-09 02:43:26.998447 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-09 02:43:26.998454 | orchestrator | 2026-04-09 02:43:26.998460 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-09 02:43:26.998467 | orchestrator | Thursday 09 April 2026 02:43:12 +0000 (0:00:01.305) 0:00:14.145 ******** 2026-04-09 02:43:26.998473 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-09 02:43:26.998480 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-09 02:43:26.998487 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-09 02:43:26.998493 | orchestrator | 2026-04-09 02:43:26.998500 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-04-09 02:43:26.998507 | orchestrator | Thursday 09 April 2026 02:43:14 +0000 (0:00:01.675) 0:00:15.820 ******** 2026-04-09 02:43:26.998513 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-09 02:43:26.998521 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-09 02:43:26.998529 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-09 02:43:26.998536 | orchestrator | 2026-04-09 02:43:26.998544 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-09 02:43:26.998552 | orchestrator | Thursday 09 April 2026 02:43:15 +0000 (0:00:01.331) 0:00:17.152 ******** 2026-04-09 02:43:26.998560 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-09 02:43:26.998569 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-09 02:43:26.998576 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-09 02:43:26.998584 | orchestrator | 2026-04-09 02:43:26.998592 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-09 02:43:26.998599 | orchestrator | Thursday 09 April 2026 02:43:17 +0000 (0:00:01.392) 0:00:18.544 ******** 2026-04-09 02:43:26.998608 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:43:26.998617 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:43:26.998642 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:43:26.998661 | orchestrator | 2026-04-09 02:43:26.998669 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-09 02:43:26.998677 | orchestrator | 
Thursday 09 April 2026 02:43:17 +0000 (0:00:00.415) 0:00:18.960 ******** 2026-04-09 02:43:26.998686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 02:43:26.998700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 02:43:26.998710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 02:43:26.998718 | orchestrator | 2026-04-09 02:43:26.998726 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-09 02:43:26.998734 | orchestrator | Thursday 09 April 2026 02:43:18 +0000 (0:00:01.275) 0:00:20.235 ******** 2026-04-09 02:43:26.998741 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:43:26.998749 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:43:26.998757 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:43:26.998765 | orchestrator | 2026-04-09 02:43:26.998773 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-09 02:43:26.998786 | orchestrator | Thursday 09 April 2026 02:43:19 +0000 (0:00:00.760) 0:00:20.996 ********
2026-04-09 02:43:26.998793 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:43:26.998801 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:43:26.998808 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:43:26.998820 | orchestrator |
2026-04-09 02:43:26.998832 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-09 02:43:26.998849 | orchestrator | Thursday 09 April 2026 02:43:26 +0000 (0:00:07.344) 0:00:28.341 ********
2026-04-09 02:45:02.275844 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:45:02.275923 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:45:02.275931 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:45:02.275942 | orchestrator |
2026-04-09 02:45:02.275951 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-09 02:45:02.275958 | orchestrator |
2026-04-09 02:45:02.275964 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-09 02:45:02.275971 | orchestrator | Thursday 09 April 2026 02:43:27 +0000 (0:00:00.588) 0:00:28.930 ********
2026-04-09 02:45:02.275977 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:45:02.275984 | orchestrator |
2026-04-09 02:45:02.275990 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-09 02:45:02.275997 | orchestrator | Thursday 09 April 2026 02:43:28 +0000 (0:00:00.244) 0:00:29.564 ********
2026-04-09 02:45:02.276003 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:45:02.276010 | orchestrator |
2026-04-09 02:45:02.276017 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-09 02:45:02.276023 | orchestrator | Thursday 09 April 2026 02:43:28 +0000 (0:00:00.244) 0:00:29.808 ********
2026-04-09 02:45:02.276029 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:45:02.276036 | orchestrator |
2026-04-09 02:45:02.276042 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-09 02:45:02.276048 | orchestrator | Thursday 09 April 2026 02:43:30 +0000 (0:00:01.638) 0:00:31.447 ********
2026-04-09 02:45:02.276054 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:45:02.276060 | orchestrator |
2026-04-09 02:45:02.276067 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-09 02:45:02.276074 | orchestrator |
2026-04-09 02:45:02.276080 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-09 02:45:02.276088 | orchestrator | Thursday 09 April 2026 02:44:24 +0000 (0:00:54.216) 0:01:25.664 ********
2026-04-09 02:45:02.276093 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:45:02.276096 | orchestrator |
2026-04-09 02:45:02.276101 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-09 02:45:02.276105 | orchestrator | Thursday 09 April 2026 02:44:24 +0000 (0:00:00.606) 0:01:26.270 ********
2026-04-09 02:45:02.276109 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:45:02.276113 | orchestrator |
2026-04-09 02:45:02.276117 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-09 02:45:02.276121 | orchestrator | Thursday 09 April 2026 02:44:25 +0000 (0:00:00.248) 0:01:26.518 ********
2026-04-09 02:45:02.276125 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:45:02.276128 | orchestrator |
2026-04-09 02:45:02.276132 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-09 02:45:02.276148 | orchestrator | Thursday 09 April 2026 02:44:31 +0000 (0:00:06.708) 0:01:33.227 ********
2026-04-09 02:45:02.276152 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:45:02.276155 | orchestrator |
2026-04-09 02:45:02.276159 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-09 02:45:02.276163 | orchestrator |
2026-04-09 02:45:02.276216 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-09 02:45:02.276221 | orchestrator | Thursday 09 April 2026 02:44:42 +0000 (0:00:10.159) 0:01:43.386 ********
2026-04-09 02:45:02.276224 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:45:02.276228 | orchestrator |
2026-04-09 02:45:02.276248 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-09 02:45:02.276255 | orchestrator | Thursday 09 April 2026 02:44:42 +0000 (0:00:00.766) 0:01:44.153 ********
2026-04-09 02:45:02.276264 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:45:02.276272 | orchestrator |
2026-04-09 02:45:02.276278 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-09 02:45:02.276285 | orchestrator | Thursday 09 April 2026 02:44:43 +0000 (0:00:00.280) 0:01:44.433 ********
2026-04-09 02:45:02.276291 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:45:02.276297 | orchestrator |
2026-04-09 02:45:02.276304 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-09 02:45:02.276311 | orchestrator | Thursday 09 April 2026 02:44:44 +0000 (0:00:01.577) 0:01:46.011 ********
2026-04-09 02:45:02.276317 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:45:02.276324 | orchestrator |
2026-04-09 02:45:02.276331 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-09 02:45:02.276339 | orchestrator |
2026-04-09 02:45:02.276345 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-09 02:45:02.276351 | orchestrator | Thursday 09 April 2026 02:44:58 +0000 (0:00:14.281) 0:02:00.292 ********
2026-04-09 02:45:02.276357 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:45:02.276362 | orchestrator |
2026-04-09 02:45:02.276369 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-09 02:45:02.276374 | orchestrator | Thursday 09 April 2026 02:44:59 +0000 (0:00:00.526) 0:02:00.818 ********
2026-04-09 02:45:02.276380 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-09 02:45:02.276386 | orchestrator | enable_outward_rabbitmq_True
2026-04-09 02:45:02.276392 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-09 02:45:02.276398 | orchestrator | outward_rabbitmq_restart
2026-04-09 02:45:02.276404 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:45:02.276410 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:45:02.276416 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:45:02.276422 | orchestrator |
2026-04-09 02:45:02.276428 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-04-09 02:45:02.276435 | orchestrator | skipping: no hosts matched
2026-04-09 02:45:02.276440 | orchestrator |
2026-04-09 02:45:02.276446 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-09 02:45:02.276453 | orchestrator | skipping: no hosts matched
2026-04-09 02:45:02.276459 | orchestrator |
2026-04-09 02:45:02.276464 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-09 02:45:02.276470 | orchestrator | skipping: no hosts matched
2026-04-09 02:45:02.276476 | orchestrator |
2026-04-09 02:45:02.276483 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:45:02.276507 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-09 02:45:02.276515 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:45:02.276521 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:45:02.276527 | orchestrator |
2026-04-09 02:45:02.276533 | orchestrator |
2026-04-09 02:45:02.276540 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:45:02.276547 | orchestrator | Thursday 09 April 2026 02:45:01 +0000 (0:00:02.385) 0:02:03.204 ********
2026-04-09 02:45:02.276553 | orchestrator | ===============================================================================
2026-04-09 02:45:02.276559 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.66s
2026-04-09 02:45:02.276566 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.93s
2026-04-09 02:45:02.276581 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.34s
2026-04-09 02:45:02.276586 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.39s
2026-04-09 02:45:02.276593 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.01s
2026-04-09 02:45:02.276600 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.70s
2026-04-09 02:45:02.276607 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.68s
2026-04-09 02:45:02.276612 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.66s
2026-04-09 02:45:02.276619 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.39s
2026-04-09 02:45:02.276625 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.39s
2026-04-09 02:45:02.276631 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.33s
2026-04-09 02:45:02.276638 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.31s
2026-04-09 02:45:02.276644 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.28s
2026-04-09 02:45:02.276652 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.00s
2026-04-09 02:45:02.276661 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.95s
2026-04-09 02:45:02.276665 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.92s
2026-04-09 02:45:02.276669 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.83s
2026-04-09 02:45:02.276673 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.77s
2026-04-09 02:45:02.276676 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.76s
2026-04-09 02:45:02.276680 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.62s
2026-04-09 02:45:04.907006 | orchestrator | 2026-04-09 02:45:04 | INFO  | Task 242ba4d0-d8ce-4a4e-a7aa-b0270822a8c5 (openvswitch) was prepared for execution.
2026-04-09 02:45:04.907142 | orchestrator | 2026-04-09 02:45:04 | INFO  | It takes a moment until task 242ba4d0-d8ce-4a4e-a7aa-b0270822a8c5 (openvswitch) has been started and output is visible here.
2026-04-09 02:45:18.628945 | orchestrator |
2026-04-09 02:45:18.629031 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 02:45:18.629039 | orchestrator |
2026-04-09 02:45:18.629043 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 02:45:18.629048 | orchestrator | Thursday 09 April 2026 02:45:09 +0000 (0:00:00.293) 0:00:00.293 ********
2026-04-09 02:45:18.629052 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:45:18.629058 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:45:18.629062 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:45:18.629065 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:45:18.629069 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:45:18.629073 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:45:18.629077 | orchestrator |
2026-04-09 02:45:18.629081 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 02:45:18.629085 | orchestrator | Thursday 09 April 2026 02:45:10 +0000 (0:00:00.699) 0:00:00.993 ********
2026-04-09 02:45:18.629089 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 02:45:18.629094 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 02:45:18.629097 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 02:45:18.629101 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 02:45:18.629105 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 02:45:18.629109 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 02:45:18.629130 | orchestrator |
2026-04-09 02:45:18.629136 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-09 02:45:18.629142 | orchestrator |
2026-04-09 02:45:18.629152 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-09 02:45:18.629158 | orchestrator | Thursday 09 April 2026 02:45:11 +0000 (0:00:00.707) 0:00:01.700 ********
2026-04-09 02:45:18.629168 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 02:45:18.629175 | orchestrator |
2026-04-09 02:45:18.629226 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-09 02:45:18.629232 | orchestrator | Thursday 09 April 2026 02:45:12 +0000 (0:00:01.222) 0:00:02.923 ********
2026-04-09 02:45:18.629238 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-09 02:45:18.629244 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-09 02:45:18.629250 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-09 02:45:18.629255 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-09 02:45:18.629261 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-09 02:45:18.629268 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-09 02:45:18.629274 | orchestrator |
2026-04-09 02:45:18.629280 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-09 02:45:18.629287 | orchestrator | Thursday 09 April 2026 02:45:13 +0000 (0:00:01.248) 0:00:04.172 ********
2026-04-09 02:45:18.629293 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-09 02:45:18.629299 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-09 02:45:18.629306 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-09 02:45:18.629312 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-09 02:45:18.629318 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-09 02:45:18.629325 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-09 02:45:18.629331 | orchestrator |
2026-04-09 02:45:18.629336 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-09 02:45:18.629340 | orchestrator | Thursday 09 April 2026 02:45:15 +0000 (0:00:01.546) 0:00:05.719 ********
2026-04-09 02:45:18.629346 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-09 02:45:18.629355 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:45:18.629364 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-09 02:45:18.629370 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:45:18.629376 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-09 02:45:18.629382 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:45:18.629388 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-09 02:45:18.629394 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:45:18.629400 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-09 02:45:18.629406 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:45:18.629412 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-09 02:45:18.629419 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:45:18.629425 | orchestrator |
2026-04-09 02:45:18.629432 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-09 02:45:18.629439 | orchestrator | Thursday 09 April 2026 02:45:16 +0000 (0:00:01.257) 0:00:06.976 ********
2026-04-09 02:45:18.629446 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:45:18.629453 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:45:18.629457 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:45:18.629461 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:45:18.629465 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:45:18.629469 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:45:18.629472 | orchestrator | 2026-04-09 02:45:18.629476 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-09 02:45:18.629488 | orchestrator | Thursday 09 April 2026 02:45:17 +0000 (0:00:00.827) 0:00:07.803 ******** 2026-04-09 02:45:18.629509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:18.629518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:18.629523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:18.629586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:18.629610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:18.629624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142646 | orchestrator | 2026-04-09 02:45:21.142653 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-09 02:45:21.142661 | orchestrator | Thursday 09 April 2026 02:45:18 +0000 (0:00:01.556) 0:00:09.360 ******** 2026-04-09 02:45:21.142668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 02:45:21.142720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 02:45:23.940765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:23.940879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:23.940897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:23.940926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:23.940958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:23.940988 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:23.941001 | orchestrator |
2026-04-09 02:45:23.941014 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-09 02:45:23.941027 | orchestrator | Thursday 09 April 2026 02:45:21 +0000 (0:00:02.517) 0:00:11.878 ********
2026-04-09 02:45:23.941047 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:45:23.941069 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:45:23.941090 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:45:23.941111 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:45:23.941123 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:45:23.941133 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:45:23.941145 | orchestrator |
2026-04-09 02:45:23.941156 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-04-09 02:45:23.941167 | orchestrator | Thursday 09 April 2026 02:45:22 +0000 (0:00:01.001) 0:00:12.879 ********
2026-04-09 02:45:23.941179 | orchestrator | changed: [testbed-node-2] => (item={'key':
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 02:45:23.941245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 02:45:23.941274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 02:45:23.941286 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 02:45:23.941310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 02:45:49.258452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 02:45:49.259473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:49.259515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:49.259554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:49.259562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:49.259585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:49.259592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 02:45:49.259598 | orchestrator |
2026-04-09 02:45:49.259605 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 02:45:49.259613 | orchestrator | Thursday 09 April 2026 02:45:24 +0000 (0:00:01.818) 0:00:14.698 ********
2026-04-09 02:45:49.259619 | orchestrator |
2026-04-09 02:45:49.259625 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 02:45:49.259630 | orchestrator | Thursday 09 April 2026 02:45:24 +0000 (0:00:00.392) 0:00:15.090 ********
2026-04-09 02:45:49.259642 | orchestrator |
2026-04-09 02:45:49.259647 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 02:45:49.259653 | orchestrator | Thursday 09 April 2026 02:45:24 +0000 (0:00:00.141) 0:00:15.232 ********
2026-04-09 02:45:49.259659 | orchestrator |
2026-04-09 02:45:49.259665 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 02:45:49.259671 | orchestrator | Thursday 09 April 2026 02:45:24 +0000 (0:00:00.157) 0:00:15.389 ********
2026-04-09 02:45:49.259676 | orchestrator |
2026-04-09 02:45:49.259682 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 02:45:49.259688 | orchestrator | Thursday 09 April 2026 02:45:24 +0000 (0:00:00.158) 0:00:15.547 ********
2026-04-09 02:45:49.259694 | orchestrator |
2026-04-09 02:45:49.259700 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 02:45:49.259705 | orchestrator | Thursday 09 April 2026 02:45:25 +0000 (0:00:00.136) 0:00:15.684 ********
2026-04-09 02:45:49.259711 | orchestrator |
2026-04-09 02:45:49.259717 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-09 02:45:49.259723 | orchestrator | Thursday 09 April 2026 02:45:25 +0000 (0:00:00.136) 0:00:15.821 ********
2026-04-09 02:45:49.259729 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:45:49.259736 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:45:49.259742 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:45:49.259748 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:45:49.259754 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:45:49.259759 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:45:49.259765 | orchestrator |
2026-04-09 02:45:49.259771 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-09 02:45:49.259778 | orchestrator | Thursday 09 April 2026 02:45:33 +0000 (0:00:08.782) 0:00:24.604 ********
2026-04-09 02:45:49.259787 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:45:49.259794 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:45:49.259800 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:45:49.259805 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:45:49.259811 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:45:49.259817 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:45:49.259823 | orchestrator |
2026-04-09 02:45:49.259829 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-09 02:45:49.259835 | orchestrator | Thursday 09 April 2026 02:45:35 +0000 (0:00:01.115) 0:00:25.720 ********
2026-04-09 02:45:49.259840 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:45:49.259846 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:45:49.259852 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:45:49.259858 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:45:49.259864 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:45:49.259869 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:45:49.259875 | orchestrator |
2026-04-09 02:45:49.259881 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-09 02:45:49.259887 | orchestrator | Thursday 09 April 2026 02:45:43 +0000 (0:00:08.114) 0:00:33.834 ********
2026-04-09 02:45:49.259893 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-09 02:45:49.259899 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-09 02:45:49.259904 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-09 02:45:49.259910 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-09 02:45:49.259916 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-09 02:45:49.259922 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-09
02:45:49.259928 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-09 02:45:49.259941 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-09 02:46:02.506829 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-09 02:46:02.506945 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-09 02:46:02.506961 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-09 02:46:02.506973 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-09 02:46:02.506984 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 02:46:02.506995 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 02:46:02.507006 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 02:46:02.507017 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 02:46:02.507028 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 02:46:02.507039 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 02:46:02.507050 | orchestrator |
2026-04-09 02:46:02.507062 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-09 02:46:02.507074 | orchestrator | Thursday 09 April 2026 02:45:49 +0000 (0:00:06.057) 0:00:39.891 ********
2026-04-09 02:46:02.507086 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-09 02:46:02.507098 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:46:02.507110 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-09 02:46:02.507121 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:46:02.507132 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-09 02:46:02.507143 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:46:02.507153 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-04-09 02:46:02.507164 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-04-09 02:46:02.507175 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-04-09 02:46:02.507186 | orchestrator |
2026-04-09 02:46:02.507197 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-09 02:46:02.507208 | orchestrator | Thursday 09 April 2026 02:45:51 +0000 (0:00:02.427) 0:00:42.319 ********
2026-04-09 02:46:02.507219 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-09 02:46:02.507230 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:46:02.507241 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-09 02:46:02.507252 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:46:02.507263 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-09 02:46:02.507358 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:46:02.507374 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-09 02:46:02.507388 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-09 02:46:02.507418 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-09 02:46:02.507432 | orchestrator
|
2026-04-09 02:46:02.507445 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-09 02:46:02.507458 | orchestrator | Thursday 09 April 2026 02:45:54 +0000 (0:00:03.208) 0:00:45.527 ********
2026-04-09 02:46:02.507471 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:46:02.507484 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:46:02.507521 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:46:02.507534 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:46:02.507547 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:46:02.507560 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:46:02.507573 | orchestrator |
2026-04-09 02:46:02.507586 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:46:02.507601 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 02:46:02.507615 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 02:46:02.507629 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 02:46:02.507643 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 02:46:02.507655 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 02:46:02.507668 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 02:46:02.507681 | orchestrator |
2026-04-09 02:46:02.507694 | orchestrator |
2026-04-09 02:46:02.507708 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:46:02.507721 | orchestrator | Thursday 09 April 2026 02:46:02 +0000 (0:00:07.170) 0:00:52.698 ********
2026-04-09 02:46:02.507754 | orchestrator | ===============================================================================
2026-04-09 02:46:02.507766 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.29s
2026-04-09 02:46:02.507776 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.78s
2026-04-09 02:46:02.507787 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.06s
2026-04-09 02:46:02.507806 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.21s
2026-04-09 02:46:02.507826 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.52s
2026-04-09 02:46:02.507845 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.43s
2026-04-09 02:46:02.507865 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.82s
2026-04-09 02:46:02.507885 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.56s
2026-04-09 02:46:02.507905 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.55s
2026-04-09 02:46:02.507923 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.26s
2026-04-09 02:46:02.507940 | orchestrator | module-load : Load modules ---------------------------------------------- 1.25s
2026-04-09 02:46:02.507957 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.22s
2026-04-09 02:46:02.507979 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.12s
2026-04-09 02:46:02.507996 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.12s
2026-04-09 02:46:02.508015 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.00s
2026-04-09 02:46:02.508034 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.83s
2026-04-09 02:46:02.508054 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2026-04-09 02:46:02.508074 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.70s
2026-04-09 02:46:05.155711 | orchestrator | 2026-04-09 02:46:05 | INFO  | Task 33d3b75e-4b18-46aa-a6fe-048a38fa5f6a (ovn) was prepared for execution.
2026-04-09 02:46:05.155819 | orchestrator | 2026-04-09 02:46:05 | INFO  | It takes a moment until task 33d3b75e-4b18-46aa-a6fe-048a38fa5f6a (ovn) has been started and output is visible here.
2026-04-09 02:46:16.301322 | orchestrator |
2026-04-09 02:46:16.301414 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 02:46:16.301426 | orchestrator |
2026-04-09 02:46:16.301433 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 02:46:16.301440 | orchestrator | Thursday 09 April 2026 02:46:09 +0000 (0:00:00.193) 0:00:00.193 ********
2026-04-09 02:46:16.301447 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:46:16.301456 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:46:16.301464 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:46:16.301471 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:46:16.301478 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:46:16.301483 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:46:16.301487 | orchestrator |
2026-04-09 02:46:16.301491 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 02:46:16.301506 | orchestrator | Thursday 09 April 2026 02:46:10 +0000 (0:00:00.714) 0:00:00.908 ********
2026-04-09 02:46:16.301510 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-09 02:46:16.301515 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-09
02:46:16.301519 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-09 02:46:16.301523 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-09 02:46:16.301527 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-09 02:46:16.301531 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-09 02:46:16.301535 | orchestrator |
2026-04-09 02:46:16.301540 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-09 02:46:16.301543 | orchestrator |
2026-04-09 02:46:16.301547 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-09 02:46:16.301551 | orchestrator | Thursday 09 April 2026 02:46:11 +0000 (0:00:00.882) 0:00:01.791 ********
2026-04-09 02:46:16.301556 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:46:16.301561 | orchestrator |
2026-04-09 02:46:16.301565 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-09 02:46:16.301568 | orchestrator | Thursday 09 April 2026 02:46:12 +0000 (0:00:01.183) 0:00:02.975 ********
2026-04-09 02:46:16.301573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301624 | orchestrator |
2026-04-09 02:46:16.301628 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-09 02:46:16.301632 | orchestrator | Thursday 09 April 2026 02:46:13 +0000 (0:00:01.210) 0:00:04.185 ********
2026-04-09 02:46:16.301639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301668 | orchestrator |
2026-04-09 02:46:16.301672 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-09 02:46:16.301676 | orchestrator | Thursday 09 April 2026 02:46:15 +0000 (0:00:01.613) 0:00:05.799 ********
2026-04-09 02:46:16.301680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:16.301692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:41.303107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:41.304147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:46:41.304208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:46:41.304220 | orchestrator | 2026-04-09 02:46:41.304231 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-09 02:46:41.304240 | orchestrator | Thursday 09 April 2026 02:46:16 +0000 (0:00:01.188) 0:00:06.987 ******** 2026-04-09 02:46:41.304249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:46:41.304282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:46:41.304291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:46:41.304299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:46:41.304307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:46:41.304337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:46:41.304370 | orchestrator | 2026-04-09 02:46:41.304383 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-09 02:46:41.304397 | orchestrator | Thursday 09 April 2026 02:46:17 +0000 (0:00:01.545) 0:00:08.533 ******** 
2026-04-09 02:46:41.304423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:41.304437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:41.304452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:41.304462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:41.304478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:41.304487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:46:41.304495 | orchestrator |
2026-04-09 02:46:41.304503 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-04-09 02:46:41.304511 | orchestrator | Thursday 09 April 2026  02:46:19 +0000 (0:00:01.402) 0:00:09.936 ********
2026-04-09 02:46:41.304520 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:46:41.304529 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:46:41.304537 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:46:41.304545 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:46:41.304552 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:46:41.304560 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:46:41.304568 | orchestrator |
2026-04-09 02:46:41.304577 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-04-09 02:46:41.304585 | orchestrator | Thursday 09 April 2026  02:46:21 +0000 (0:00:02.587) 0:00:12.523 ********
2026-04-09 02:46:41.304593 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-04-09 02:46:41.304602 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-04-09 02:46:41.304610 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-04-09 02:46:41.304617 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-04-09 02:46:41.304625 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-04-09 02:46:41.304633 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-04-09 02:46:41.304648 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-09 02:47:21.496648 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-09 02:47:21.496741 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-09 02:47:21.496767 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-09 02:47:21.496776 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-09 02:47:21.496784 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-09 02:47:21.496792 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-09 02:47:21.496802 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-09 02:47:21.496831 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-09 02:47:21.496840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-09 02:47:21.496848 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-09 02:47:21.496856 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-09 02:47:21.496865 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-09 02:47:21.496874 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-09 02:47:21.496881 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-09 02:47:21.496889 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-09 02:47:21.496898 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-09 02:47:21.496906 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-09 02:47:21.496913 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-09 02:47:21.496921 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-09 02:47:21.496929 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-09 02:47:21.496936 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-09 02:47:21.496944 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-09 02:47:21.496952 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-09 02:47:21.496960 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-09 02:47:21.496974 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-09 02:47:21.496989 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-09 02:47:21.497004 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-09 02:47:21.497020 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-09 02:47:21.497034 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-09 02:47:21.497049 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-09 02:47:21.497064 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-09 02:47:21.497080 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-09 02:47:21.497096 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-09 02:47:21.497112 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-09 02:47:21.497126 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-09 02:47:21.497141 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-04-09 02:47:21.497185 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-04-09 02:47:21.497201 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-04-09 02:47:21.497222 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-04-09 02:47:21.497237 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-04-09 02:47:21.497251 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-09 02:47:21.497265 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-04-09 02:47:21.497280 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-09 02:47:21.497295 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-09 02:47:21.497310 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-09 02:47:21.497324 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-09 02:47:21.497338 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-09 02:47:21.497375 | orchestrator |
2026-04-09 02:47:21.497389 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-09 02:47:21.497403 | orchestrator | Thursday 09 April 2026  02:46:40 +0000 (0:00:18.833) 0:00:31.356 ********
2026-04-09 02:47:21.497415 | orchestrator |
2026-04-09 02:47:21.497429 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-09 02:47:21.497442 | orchestrator | Thursday 09 April 2026  02:46:40 +0000 (0:00:00.264) 0:00:31.620 ********
2026-04-09 02:47:21.497456 | orchestrator |
2026-04-09 02:47:21.497469 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-09 02:47:21.497482 | orchestrator | Thursday 09 April 2026  02:46:40 +0000 (0:00:00.065) 0:00:31.685 ********
2026-04-09 02:47:21.497496 | orchestrator |
2026-04-09 02:47:21.497509 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-09 02:47:21.497523 | orchestrator | Thursday 09 April 2026  02:46:41 +0000 (0:00:00.086) 0:00:31.772 ********
2026-04-09 02:47:21.497535 | orchestrator |
2026-04-09 02:47:21.497548 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-09 02:47:21.497561 | orchestrator | Thursday 09 April 2026  02:46:41 +0000 (0:00:00.070) 0:00:31.842 ********
2026-04-09 02:47:21.497574 | orchestrator |
2026-04-09 02:47:21.497588 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-09 02:47:21.497601 | orchestrator | Thursday 09 April 2026  02:46:41 +0000 (0:00:00.077) 0:00:31.920 ********
2026-04-09 02:47:21.497615 | orchestrator |
2026-04-09 02:47:21.497628 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-04-09 02:47:21.497642 | orchestrator | Thursday 09 April 2026  02:46:41 +0000 (0:00:00.064) 0:00:31.984 ********
2026-04-09 02:47:21.497655 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:47:21.497669 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:47:21.497683 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:47:21.497696 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:47:21.497709 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:47:21.497722 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:47:21.497735 | orchestrator |
2026-04-09 02:47:21.497748 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-04-09 02:47:21.497762 | orchestrator | Thursday 09 April 2026  02:46:42 +0000 (0:00:01.567) 0:00:33.552 ********
2026-04-09 02:47:21.497785 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:47:21.497800 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:47:21.497814 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:47:21.497827 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:47:21.497840 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:47:21.497853 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:47:21.497866 | orchestrator |
2026-04-09 02:47:21.497880 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-04-09 02:47:21.497893 | orchestrator |
2026-04-09 02:47:21.497906 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-09 02:47:21.497919 | orchestrator | Thursday 09 April 2026  02:47:19 +0000 (0:00:36.167) 0:01:09.719 ********
2026-04-09 02:47:21.497933 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:47:21.497946 | orchestrator |
2026-04-09 02:47:21.497959 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-09 02:47:21.497973 | orchestrator | Thursday 09 April 2026  02:47:19 +0000 (0:00:00.776) 0:01:10.496 ********
2026-04-09 02:47:21.497986 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:47:21.498000 | orchestrator |
2026-04-09 02:47:21.498013 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-04-09 02:47:21.498094 | orchestrator | Thursday 09 April 2026  02:47:20 +0000 (0:00:00.578) 0:01:11.074 ********
2026-04-09 02:47:21.498110 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:47:21.498123 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:47:21.498138 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:47:21.498153 | orchestrator |
2026-04-09 02:47:21.498167 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-04-09 02:47:21.498191 | orchestrator | Thursday 09 April 2026  02:47:21 +0000 (0:00:01.104) 0:01:12.179 ********
2026-04-09 02:47:33.233885 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:47:33.233975 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:47:33.233983 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:47:33.233989 | orchestrator |
2026-04-09 02:47:33.233996 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-04-09 02:47:33.234057 | orchestrator | Thursday 09 April 2026  02:47:21 +0000 (0:00:00.350) 0:01:12.530 ********
2026-04-09 02:47:33.234064 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:47:33.234070 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:47:33.234075 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:47:33.234080 | orchestrator |
2026-04-09 02:47:33.234086 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-04-09 02:47:33.234091 | orchestrator | Thursday 09 April 2026  02:47:22 +0000 (0:00:00.365) 0:01:12.895 ********
2026-04-09 02:47:33.234096 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:47:33.234101 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:47:33.234107 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:47:33.234112 | orchestrator |
2026-04-09 02:47:33.234117 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-04-09 02:47:33.234123 | orchestrator | Thursday 09 April 2026  02:47:22 +0000 (0:00:00.373) 0:01:13.269 ********
2026-04-09 02:47:33.234128 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:47:33.234133 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:47:33.234138 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:47:33.234143 | orchestrator |
2026-04-09 02:47:33.234149 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-04-09 02:47:33.234154 | orchestrator | Thursday 09 April 2026  02:47:23 +0000 (0:00:00.551) 0:01:13.820 ********
2026-04-09 02:47:33.234159 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234165 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234170 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234175 | orchestrator |
2026-04-09 02:47:33.234181 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-04-09 02:47:33.234204 | orchestrator | Thursday 09 April 2026  02:47:23 +0000 (0:00:00.298) 0:01:14.118 ********
2026-04-09 02:47:33.234209 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234215 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234220 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234225 | orchestrator |
2026-04-09 02:47:33.234230 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-04-09 02:47:33.234235 | orchestrator | Thursday 09 April 2026  02:47:23 +0000 (0:00:00.349) 0:01:14.468 ********
2026-04-09 02:47:33.234240 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234245 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234250 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234255 | orchestrator |
2026-04-09 02:47:33.234260 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-04-09 02:47:33.234265 | orchestrator | Thursday 09 April 2026  02:47:24 +0000 (0:00:00.349) 0:01:14.817 ********
2026-04-09 02:47:33.234271 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234276 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234281 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234286 | orchestrator |
2026-04-09 02:47:33.234291 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-04-09 02:47:33.234296 | orchestrator | Thursday 09 April 2026  02:47:24 +0000 (0:00:00.315) 0:01:15.133 ********
2026-04-09 02:47:33.234301 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234307 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234312 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234317 | orchestrator |
2026-04-09 02:47:33.234322 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-04-09 02:47:33.234327 | orchestrator | Thursday 09 April 2026  02:47:24 +0000 (0:00:00.541) 0:01:15.675 ********
2026-04-09 02:47:33.234332 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234337 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234389 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234395 | orchestrator |
2026-04-09 02:47:33.234400 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-04-09 02:47:33.234405 | orchestrator | Thursday 09 April 2026  02:47:25 +0000 (0:00:00.315) 0:01:15.990 ********
2026-04-09 02:47:33.234410 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234415 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234420 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234425 | orchestrator |
2026-04-09 02:47:33.234430 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-04-09 02:47:33.234435 | orchestrator | Thursday 09 April 2026  02:47:25 +0000 (0:00:00.313) 0:01:16.303 ********
2026-04-09 02:47:33.234441 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234448 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234454 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234460 | orchestrator |
2026-04-09 02:47:33.234466 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-04-09 02:47:33.234472 | orchestrator | Thursday 09 April 2026  02:47:25 +0000 (0:00:00.317) 0:01:16.621 ********
2026-04-09 02:47:33.234477 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234486 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234495 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234507 | orchestrator |
2026-04-09 02:47:33.234520 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-04-09 02:47:33.234528 | orchestrator | Thursday 09 April 2026  02:47:26 +0000 (0:00:00.522) 0:01:17.143 ********
2026-04-09 02:47:33.234536 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234545 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234553 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234561 | orchestrator |
2026-04-09 02:47:33.234570 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-04-09 02:47:33.234585 | orchestrator | Thursday 09 April 2026  02:47:26 +0000 (0:00:00.310) 0:01:17.454 ********
2026-04-09 02:47:33.234594 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234603 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234611 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234619 | orchestrator |
2026-04-09 02:47:33.234627 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-04-09 02:47:33.234635 | orchestrator | Thursday 09 April 2026  02:47:27 +0000 (0:00:00.313) 0:01:17.767 ********
2026-04-09 02:47:33.234659 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234669 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234678 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234687 | orchestrator |
2026-04-09 02:47:33.234696 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-09 02:47:33.234714 | orchestrator | Thursday 09 April 2026  02:47:27 +0000 (0:00:00.351) 0:01:18.118 ********
2026-04-09 02:47:33.234727 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:47:33.234735 | orchestrator |
2026-04-09 02:47:33.234743 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-04-09 02:47:33.234752 | orchestrator | Thursday 09 April 2026  02:47:28 +0000 (0:00:00.831) 0:01:18.949 ********
2026-04-09 02:47:33.234760 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:47:33.234768 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:47:33.234777 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:47:33.234785 | orchestrator |
2026-04-09 02:47:33.234793 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-04-09 02:47:33.234802 | orchestrator | Thursday 09 April 2026  02:47:28 +0000 (0:00:00.471) 0:01:19.421 ********
2026-04-09 02:47:33.234810 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:47:33.234819 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:47:33.234827 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:47:33.234836 | orchestrator |
2026-04-09 02:47:33.234844 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-04-09 02:47:33.234853 | orchestrator | Thursday 09 April 2026  02:47:29 +0000 (0:00:00.476) 0:01:19.897 ********
2026-04-09 02:47:33.234861 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234869 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234877 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234885 | orchestrator |
2026-04-09 02:47:33.234894 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-04-09 02:47:33.234903 | orchestrator | Thursday 09 April 2026  02:47:29 +0000 (0:00:00.356) 0:01:20.254 ********
2026-04-09 02:47:33.234911 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234920 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234929 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234937 | orchestrator |
2026-04-09 02:47:33.234946 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-04-09 02:47:33.234955 | orchestrator | Thursday 09 April 2026  02:47:30 +0000 (0:00:00.587) 0:01:20.841 ********
2026-04-09 02:47:33.234961 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234966 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.234971 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.234976 | orchestrator |
2026-04-09 02:47:33.234981 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-04-09 02:47:33.234986 | orchestrator | Thursday 09 April 2026  02:47:30 +0000 (0:00:00.348) 0:01:21.190 ********
2026-04-09 02:47:33.234991 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.234996 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.235001 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.235006 | orchestrator |
2026-04-09 02:47:33.235011 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-04-09 02:47:33.235016 | orchestrator | Thursday 09 April 2026  02:47:30 +0000 (0:00:00.355) 0:01:21.545 ********
2026-04-09 02:47:33.235030 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.235035 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.235040 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.235045 | orchestrator |
2026-04-09 02:47:33.235051 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-04-09 02:47:33.235056 | orchestrator | Thursday 09 April 2026  02:47:31 +0000 (0:00:00.336) 0:01:21.882 ********
2026-04-09 02:47:33.235061 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:47:33.235066 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:47:33.235071 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:47:33.235076 | orchestrator |
2026-04-09 02:47:33.235081 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-09 02:47:33.235086 | orchestrator | Thursday 09 April 2026  02:47:31 +0000 (0:00:00.564) 0:01:22.446 ********
2026-04-09 02:47:33.235093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:47:33.235101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions':
{}}}) 2026-04-09 02:47:33.235106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:33.235122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580690 | orchestrator | 2026-04-09 02:47:39.580701 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-09 02:47:39.580711 | orchestrator | Thursday 09 April 2026 02:47:33 +0000 (0:00:01.475) 0:01:23.922 ******** 2026-04-09 02:47:39.580722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580851 | orchestrator | 2026-04-09 02:47:39.580865 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-09 02:47:39.580888 | orchestrator | Thursday 09 April 2026 02:47:37 +0000 (0:00:03.909) 0:01:27.832 ******** 2026-04-09 02:47:39.580902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:39.580991 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.547021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.547165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.547182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.547195 | orchestrator | 2026-04-09 02:47:58.547208 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-09 02:47:58.547221 | 
orchestrator | Thursday 09 April 2026 02:47:39 +0000 (0:00:02.007) 0:01:29.839 ******** 2026-04-09 02:47:58.547232 | orchestrator | 2026-04-09 02:47:58.547243 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-09 02:47:58.547254 | orchestrator | Thursday 09 April 2026 02:47:39 +0000 (0:00:00.067) 0:01:29.907 ******** 2026-04-09 02:47:58.547265 | orchestrator | 2026-04-09 02:47:58.547275 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-09 02:47:58.547286 | orchestrator | Thursday 09 April 2026 02:47:39 +0000 (0:00:00.282) 0:01:30.189 ******** 2026-04-09 02:47:58.547297 | orchestrator | 2026-04-09 02:47:58.547307 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-09 02:47:58.547318 | orchestrator | Thursday 09 April 2026 02:47:39 +0000 (0:00:00.069) 0:01:30.259 ******** 2026-04-09 02:47:58.547329 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:47:58.547341 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:47:58.547384 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:47:58.547395 | orchestrator | 2026-04-09 02:47:58.547406 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-09 02:47:58.547417 | orchestrator | Thursday 09 April 2026 02:47:42 +0000 (0:00:02.501) 0:01:32.760 ******** 2026-04-09 02:47:58.547427 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:47:58.547438 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:47:58.547449 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:47:58.547460 | orchestrator | 2026-04-09 02:47:58.547470 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-09 02:47:58.547481 | orchestrator | Thursday 09 April 2026 02:47:44 +0000 (0:00:02.594) 0:01:35.354 ******** 2026-04-09 02:47:58.547492 | orchestrator | changed: 
[testbed-node-2] 2026-04-09 02:47:58.547503 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:47:58.547513 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:47:58.547524 | orchestrator | 2026-04-09 02:47:58.547538 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-09 02:47:58.547550 | orchestrator | Thursday 09 April 2026 02:47:51 +0000 (0:00:06.606) 0:01:41.960 ******** 2026-04-09 02:47:58.547563 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:47:58.547575 | orchestrator | 2026-04-09 02:47:58.547587 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-09 02:47:58.547601 | orchestrator | Thursday 09 April 2026 02:47:51 +0000 (0:00:00.129) 0:01:42.090 ******** 2026-04-09 02:47:58.547614 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:47:58.547627 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:47:58.547639 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:47:58.547651 | orchestrator | 2026-04-09 02:47:58.547663 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-09 02:47:58.547677 | orchestrator | Thursday 09 April 2026 02:47:52 +0000 (0:00:01.070) 0:01:43.161 ******** 2026-04-09 02:47:58.547691 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:47:58.547726 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:47:58.547744 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:47:58.547762 | orchestrator | 2026-04-09 02:47:58.547781 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-09 02:47:58.547799 | orchestrator | Thursday 09 April 2026 02:47:53 +0000 (0:00:00.645) 0:01:43.806 ******** 2026-04-09 02:47:58.547819 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:47:58.547839 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:47:58.547859 | orchestrator | ok: [testbed-node-2] 2026-04-09 
02:47:58.547877 | orchestrator | 2026-04-09 02:47:58.547896 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-09 02:47:58.547933 | orchestrator | Thursday 09 April 2026 02:47:53 +0000 (0:00:00.806) 0:01:44.613 ******** 2026-04-09 02:47:58.547953 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:47:58.547972 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:47:58.547991 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:47:58.548006 | orchestrator | 2026-04-09 02:47:58.548017 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-09 02:47:58.548028 | orchestrator | Thursday 09 April 2026 02:47:54 +0000 (0:00:00.635) 0:01:45.249 ******** 2026-04-09 02:47:58.548039 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:47:58.548049 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:47:58.548080 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:47:58.548092 | orchestrator | 2026-04-09 02:47:58.548103 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-09 02:47:58.548113 | orchestrator | Thursday 09 April 2026 02:47:55 +0000 (0:00:01.401) 0:01:46.651 ******** 2026-04-09 02:47:58.548124 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:47:58.548135 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:47:58.548145 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:47:58.548155 | orchestrator | 2026-04-09 02:47:58.548167 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-04-09 02:47:58.548177 | orchestrator | Thursday 09 April 2026 02:47:56 +0000 (0:00:00.785) 0:01:47.436 ******** 2026-04-09 02:47:58.548188 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:47:58.548199 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:47:58.548209 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:47:58.548219 | orchestrator | 2026-04-09 
02:47:58.548230 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-09 02:47:58.548241 | orchestrator | Thursday 09 April 2026 02:47:57 +0000 (0:00:00.335) 0:01:47.771 ******** 2026-04-09 02:47:58.548253 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.548294 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.548306 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.548317 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.548338 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.548418 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.548431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.548448 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:47:58.548471 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:48:05.880211 | orchestrator | 2026-04-09 02:48:05.880405 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-09 02:48:05.880439 | orchestrator | Thursday 09 April 2026 02:47:58 +0000 (0:00:01.454) 0:01:49.226 ******** 2026-04-09 02:48:05.880463 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:48:05.880485 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:48:05.880534 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:48:05.880553 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:48:05.880608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:48:05.880628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:48:05.880645 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 02:48:05.880663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})
2026-04-09 02:48:05.880699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.880719 | orchestrator |
2026-04-09 02:48:05.880743 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-09 02:48:05.880765 | orchestrator | Thursday 09 April 2026 02:48:02 +0000 (0:00:03.979) 0:01:53.206 ********
2026-04-09 02:48:05.880814 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.880838 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.880864 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.880888 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.880923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.880939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.880956 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.880975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.881001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 02:48:05.881018 | orchestrator |
2026-04-09 02:48:05.881032 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 02:48:05.881047 | orchestrator | Thursday 09 April 2026 02:48:05 +0000 (0:00:03.127) 0:01:56.333 ********
2026-04-09 02:48:05.881062 | orchestrator |
2026-04-09 02:48:05.881075 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 02:48:05.881090 | orchestrator | Thursday 09 April 2026 02:48:05 +0000 (0:00:00.063) 0:01:56.397 ********
2026-04-09 02:48:05.881103 | orchestrator |
2026-04-09 02:48:05.881117 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 02:48:05.881130 | orchestrator | Thursday 09 April 2026 02:48:05 +0000 (0:00:00.085) 0:01:56.483 ********
2026-04-09 02:48:05.881145 | orchestrator |
2026-04-09 02:48:05.881168 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-09 02:48:30.488221 | orchestrator | Thursday 09 April 2026 02:48:05 +0000 (0:00:00.073) 0:01:56.556 ********
2026-04-09 02:48:30.488303 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:48:30.488310 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:48:30.488315 | orchestrator |
2026-04-09 02:48:30.488320 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-09 02:48:30.488324 | orchestrator | Thursday 09 April 2026 02:48:12 +0000 (0:00:06.251) 0:02:02.807 ********
2026-04-09 02:48:30.488329 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:48:30.488333 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:48:30.488337 | orchestrator |
2026-04-09 02:48:30.488387 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-09 02:48:30.488392 | orchestrator | Thursday 09 April 2026 02:48:18 +0000 (0:00:06.296) 0:02:09.103 ********
2026-04-09 02:48:30.488396 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:48:30.488399 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:48:30.488403 | orchestrator |
2026-04-09 02:48:30.488407 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-09 02:48:30.488411 | orchestrator | Thursday 09 April 2026 02:48:24 +0000 (0:00:06.281) 0:02:15.385 ********
2026-04-09 02:48:30.488415 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:48:30.488419 | orchestrator |
2026-04-09 02:48:30.488423 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-09 02:48:30.488427 | orchestrator | Thursday 09 April 2026 02:48:24 +0000 (0:00:00.137) 0:02:15.523 ********
2026-04-09 02:48:30.488431 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:48:30.488436 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:48:30.488440 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:48:30.488444 | orchestrator |
2026-04-09 02:48:30.488448 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-09 02:48:30.488451 | orchestrator | Thursday 09 April 2026 02:48:25 +0000 (0:00:01.037) 0:02:16.560 ********
2026-04-09 02:48:30.488455 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:48:30.488459 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:48:30.488463 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:48:30.488467 | orchestrator |
2026-04-09 02:48:30.488471 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-09 02:48:30.488474 | orchestrator | Thursday 09 April 2026 02:48:26 +0000 (0:00:00.664) 0:02:17.224 ********
2026-04-09 02:48:30.488478 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:48:30.488482 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:48:30.488486 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:48:30.488490 | orchestrator |
2026-04-09 02:48:30.488494 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-09 02:48:30.488498 | orchestrator | Thursday 09 April 2026 02:48:27 +0000 (0:00:00.791) 0:02:18.015 ********
2026-04-09 02:48:30.488501 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:48:30.488505 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:48:30.488509 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:48:30.488513 | orchestrator |
2026-04-09 02:48:30.488517 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-09 02:48:30.488520 | orchestrator | Thursday 09 April 2026 02:48:27 +0000 (0:00:00.625) 0:02:18.641 ********
2026-04-09 02:48:30.488524 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:48:30.488528 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:48:30.488532 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:48:30.488537 | orchestrator |
2026-04-09 02:48:30.488544 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-09 02:48:30.488550 | orchestrator | Thursday 09 April 2026 02:48:28 +0000 (0:00:01.023) 0:02:19.664 ********
2026-04-09 02:48:30.488555 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:48:30.488560 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:48:30.488571 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:48:30.488580 | orchestrator |
2026-04-09 02:48:30.488585 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:48:30.488593 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-09 02:48:30.488600 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-09 02:48:30.488606 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-09 02:48:30.488612 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:48:30.488623 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:48:30.488629 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 02:48:30.488635 | orchestrator |
2026-04-09 02:48:30.488641 | orchestrator |
2026-04-09 02:48:30.488660 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:48:30.488667 | orchestrator | Thursday 09 April 2026 02:48:29 +0000 (0:00:01.026) 0:02:20.691 ********
2026-04-09 02:48:30.488673 | orchestrator | ===============================================================================
2026-04-09 02:48:30.488679 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 36.17s
2026-04-09 02:48:30.488685 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.83s
2026-04-09 02:48:30.488692 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 12.89s
2026-04-09 02:48:30.488698 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.89s
2026-04-09 02:48:30.488704 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.75s
2026-04-09 02:48:30.488723 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.98s
2026-04-09 02:48:30.488729 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.91s
2026-04-09 02:48:30.488735 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.13s
2026-04-09 02:48:30.488741 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.59s
2026-04-09 02:48:30.488747 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.01s
2026-04-09 02:48:30.488754 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.61s
2026-04-09 02:48:30.488759 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.57s
2026-04-09 02:48:30.488765 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.55s
2026-04-09 02:48:30.488771 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s
2026-04-09 02:48:30.488777 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s
2026-04-09 02:48:30.488783 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.40s
2026-04-09 02:48:30.488789 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.40s
2026-04-09 02:48:30.488796 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.21s
2026-04-09 02:48:30.488802 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.19s
2026-04-09 02:48:30.488808 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.18s
2026-04-09 02:48:30.876303 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-09 02:48:30.876413 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-04-09 02:48:33.426444 | orchestrator | 2026-04-09 02:48:33 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-09 02:48:43.564033 | orchestrator | 2026-04-09 02:48:43 | INFO  | Task 3234fc37-1aa9-49bc-93ef-a0a90ec45c47 (wipe-partitions) was prepared for execution.
2026-04-09 02:48:43.564167 | orchestrator | 2026-04-09 02:48:43 | INFO  | It takes a moment until task 3234fc37-1aa9-49bc-93ef-a0a90ec45c47 (wipe-partitions) has been started and output is visible here.
2026-04-09 02:48:57.858383 | orchestrator |
2026-04-09 02:48:57.858506 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-09 02:48:57.858528 | orchestrator |
2026-04-09 02:48:57.858542 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-09 02:48:57.858555 | orchestrator | Thursday 09 April 2026 02:48:48 +0000 (0:00:00.156) 0:00:00.156 ********
2026-04-09 02:48:57.858598 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:48:57.858614 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:48:57.858626 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:48:57.858639 | orchestrator |
2026-04-09 02:48:57.858652 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-09 02:48:57.858664 | orchestrator | Thursday 09 April 2026 02:48:48 +0000 (0:00:00.617) 0:00:00.774 ********
2026-04-09 02:48:57.858677 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:48:57.858690 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:48:57.858703 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:48:57.858715 | orchestrator |
2026-04-09 02:48:57.858727 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-09 02:48:57.858740 | orchestrator | Thursday 09 April 2026 02:48:49 +0000 (0:00:00.405) 0:00:01.180 ********
2026-04-09 02:48:57.858753 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:48:57.858767 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:48:57.858781 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:48:57.858795 | orchestrator |
2026-04-09 02:48:57.858809 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-09 02:48:57.858823 | orchestrator | Thursday 09 April 2026 02:48:49 +0000 (0:00:00.622) 0:00:01.802 ********
2026-04-09 02:48:57.858837 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:48:57.858850 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:48:57.858863 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:48:57.858876 | orchestrator |
2026-04-09 02:48:57.858890 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-09 02:48:57.858903 | orchestrator | Thursday 09 April 2026 02:48:50 +0000 (0:00:00.280) 0:00:02.083 ********
2026-04-09 02:48:57.858916 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-09 02:48:57.858930 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-09 02:48:57.858945 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-09 02:48:57.858957 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-09 02:48:57.858972 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-09 02:48:57.858985 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-09 02:48:57.859015 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-09 02:48:57.859030 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-09 02:48:57.859044 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-09 02:48:57.859057 | orchestrator |
2026-04-09 02:48:57.859070 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-09 02:48:57.859083 | orchestrator | Thursday 09 April 2026 02:48:51 +0000 (0:00:01.283) 0:00:03.366 ********
2026-04-09 02:48:57.859096 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-09 02:48:57.859108 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-09 02:48:57.859123 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-09 02:48:57.859137 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-09 02:48:57.859151 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-09 02:48:57.859164 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-09 02:48:57.859177 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-09 02:48:57.859191 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-09 02:48:57.859205 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-09 02:48:57.859219 | orchestrator |
2026-04-09 02:48:57.859251 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-09 02:48:57.859266 | orchestrator | Thursday 09 April 2026 02:48:53 +0000 (0:00:01.656) 0:00:05.023 ********
2026-04-09 02:48:57.859280 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-09 02:48:57.859295 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-09 02:48:57.859309 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-09 02:48:57.859322 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-09 02:48:57.859414 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-09 02:48:57.859431 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-09 02:48:57.859443 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-09 02:48:57.859454 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-09 02:48:57.859465 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-09 02:48:57.859477 | orchestrator |
2026-04-09 02:48:57.859490 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-09 02:48:57.859502 | orchestrator | Thursday 09 April 2026 02:48:56 +0000 (0:00:03.040) 0:00:08.064 ********
2026-04-09 02:48:57.859512 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:48:57.859525 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:48:57.859536 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:48:57.859547 | orchestrator |
2026-04-09 02:48:57.859560 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-09 02:48:57.859572 | orchestrator | Thursday 09 April 2026 02:48:56 +0000 (0:00:00.630) 0:00:08.694 ********
2026-04-09 02:48:57.859585 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:48:57.859597 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:48:57.859608 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:48:57.859620 | orchestrator |
2026-04-09 02:48:57.859631 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:48:57.859645 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:48:57.859660 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:48:57.859702 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:48:57.859716 | orchestrator |
2026-04-09 02:48:57.859728 | orchestrator |
2026-04-09 02:48:57.859741 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:48:57.859754 | orchestrator | Thursday 09 April 2026 02:48:57 +0000 (0:00:00.657) 0:00:09.352 ********
2026-04-09 02:48:57.859766 | orchestrator | ===============================================================================
2026-04-09 02:48:57.859778 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.04s
2026-04-09 02:48:57.859786 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.66s
2026-04-09 02:48:57.859794 | orchestrator | Check device availability ----------------------------------------------- 1.28s
2026-04-09 02:48:57.859800 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s
2026-04-09 02:48:57.859808 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2026-04-09 02:48:57.859815 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s
2026-04-09 02:48:57.859822 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.62s
2026-04-09 02:48:57.859829 | orchestrator | Remove all rook related logical devices --------------------------------- 0.41s
2026-04-09 02:48:57.859841 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s
2026-04-09 02:49:10.556856 | orchestrator | 2026-04-09 02:49:10 | INFO  | Task ba05d64d-813a-4ec4-b617-8e1c65fc8b38 (facts) was prepared for execution.
2026-04-09 02:49:10.556947 | orchestrator | 2026-04-09 02:49:10 | INFO  | It takes a moment until task ba05d64d-813a-4ec4-b617-8e1c65fc8b38 (facts) has been started and output is visible here.
2026-04-09 02:49:24.343696 | orchestrator |
2026-04-09 02:49:24.343784 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-09 02:49:24.343795 | orchestrator |
2026-04-09 02:49:24.343801 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-09 02:49:24.343830 | orchestrator | Thursday 09 April 2026 02:49:15 +0000 (0:00:00.318) 0:00:00.318 ********
2026-04-09 02:49:24.343836 | orchestrator | ok: [testbed-manager]
2026-04-09 02:49:24.343844 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:49:24.343849 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:49:24.343855 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:49:24.343861 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:49:24.343867 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:49:24.343872 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:49:24.343878 | orchestrator |
2026-04-09 02:49:24.343884 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-09 02:49:24.343891 | orchestrator | Thursday 09 April 2026 02:49:16 +0000 (0:00:01.198) 0:00:01.516 ********
2026-04-09 02:49:24.343897 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:49:24.343904 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:49:24.343909 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:49:24.343915 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:49:24.343921 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:24.343926 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:49:24.343932 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:49:24.343938 | orchestrator |
2026-04-09 02:49:24.343944 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 02:49:24.343949 | orchestrator |
2026-04-09 02:49:24.343955 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 02:49:24.343961 | orchestrator | Thursday 09 April 2026 02:49:18 +0000 (0:00:01.458) 0:00:02.975 ********
2026-04-09 02:49:24.343966 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:49:24.343972 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:49:24.343978 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:49:24.343984 | orchestrator | ok: [testbed-manager]
2026-04-09 02:49:24.343989 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:49:24.343995 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:49:24.344001 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:49:24.344006 | orchestrator |
2026-04-09 02:49:24.344012 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-09 02:49:24.344018 | orchestrator |
2026-04-09 02:49:24.344024 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-09 02:49:24.344029 | orchestrator | Thursday 09 April 2026 02:49:23 +0000 (0:00:05.148) 0:00:08.124 ********
2026-04-09 02:49:24.344035 | orchestrator | skipping: [testbed-manager]
2026-04-09 02:49:24.344041 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:49:24.344047 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:49:24.344052 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:49:24.344058 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:24.344064 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:49:24.344069 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:49:24.344075 | orchestrator |
2026-04-09 02:49:24.344081 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:49:24.344087 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:49:24.344151 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:49:24.344161 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:49:24.344167 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:49:24.344173 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:49:24.344178 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:49:24.344189 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 02:49:24.344195 | orchestrator |
2026-04-09 02:49:24.344201 | orchestrator |
2026-04-09 02:49:24.344206 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:49:24.344212 | orchestrator | Thursday 09 April 2026 02:49:23 +0000 (0:00:00.621) 0:00:08.745 ********
2026-04-09 02:49:24.344218 | orchestrator | ===============================================================================
2026-04-09 02:49:24.344224 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.15s
2026-04-09 02:49:24.344229 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s
2026-04-09 02:49:24.344235 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.20s
2026-04-09 02:49:24.344241 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s
2026-04-09 02:49:27.080324 | orchestrator | 2026-04-09 02:49:27 | INFO  | Task 549825e3-1ddd-450a-a1ca-e2b19897c893 (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-09 02:49:27.080441 | orchestrator | 2026-04-09 02:49:27 | INFO  | It takes a moment until task 549825e3-1ddd-450a-a1ca-e2b19897c893 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-04-09 02:49:40.627631 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 02:49:40.627730 | orchestrator | 2.16.14
2026-04-09 02:49:40.627743 | orchestrator |
2026-04-09 02:49:40.627751 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-09 02:49:40.627759 | orchestrator |
2026-04-09 02:49:40.627766 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 02:49:40.627773 | orchestrator | Thursday 09 April 2026 02:49:32 +0000 (0:00:00.380) 0:00:00.380 ********
2026-04-09 02:49:40.627781 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 02:49:40.627788 | orchestrator |
2026-04-09 02:49:40.627808 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 02:49:40.627815 | orchestrator | Thursday 09 April 2026 02:49:32 +0000 (0:00:00.267) 0:00:00.648 ********
2026-04-09 02:49:40.627822 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:49:40.627828 | orchestrator |
2026-04-09 02:49:40.627832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.627836 | orchestrator | Thursday 09 April 2026 02:49:32 +0000 (0:00:00.273) 0:00:00.921 ********
2026-04-09 02:49:40.627840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-09 02:49:40.627844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-09 02:49:40.627848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-09 02:49:40.627852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-09 02:49:40.627856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-09 02:49:40.627860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-09 02:49:40.627863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-09 02:49:40.627867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-09 02:49:40.627871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-09 02:49:40.627875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-09 02:49:40.627879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-09 02:49:40.627882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-09 02:49:40.627903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-09 02:49:40.627907 | orchestrator |
2026-04-09 02:49:40.627911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.627914 | orchestrator | Thursday 09 April 2026 02:49:33 +0000 (0:00:00.583) 0:00:01.505 ********
2026-04-09 02:49:40.627918 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:40.627923 | orchestrator |
2026-04-09 02:49:40.627926 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.627930 | orchestrator | Thursday 09 April 2026 02:49:33 +0000 (0:00:00.280) 0:00:01.786 ********
2026-04-09 02:49:40.627934 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:40.627938 | orchestrator |
2026-04-09 02:49:40.627941 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.627945 | orchestrator | Thursday 09 April 2026 02:49:33 +0000 (0:00:00.224) 0:00:02.010 ********
2026-04-09 02:49:40.627949 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:40.627953 | orchestrator |
2026-04-09 02:49:40.627956 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.627960 | orchestrator | Thursday 09 April 2026 02:49:34 +0000 (0:00:00.234) 0:00:02.245 ********
2026-04-09 02:49:40.627964 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:40.627968 | orchestrator |
2026-04-09 02:49:40.627971 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.627975 | orchestrator | Thursday 09 April 2026 02:49:34 +0000 (0:00:00.275) 0:00:02.521 ********
2026-04-09 02:49:40.627979 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:40.627995 | orchestrator |
2026-04-09 02:49:40.628007 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.628014 | orchestrator | Thursday 09 April 2026 02:49:34 +0000 (0:00:00.252) 0:00:02.774 ********
2026-04-09 02:49:40.628020 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:40.628026 | orchestrator |
2026-04-09 02:49:40.628033 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.628039 | orchestrator | Thursday 09 April 2026 02:49:34 +0000 (0:00:00.247) 0:00:03.021 ********
2026-04-09 02:49:40.628045 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:40.628051 | orchestrator |
2026-04-09 02:49:40.628058 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.628065 | orchestrator | Thursday 09 April 2026 02:49:35 +0000 (0:00:00.241) 0:00:03.263 ********
2026-04-09 02:49:40.628073 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:49:40.628080 | orchestrator |
2026-04-09 02:49:40.628087 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.628093 | orchestrator | Thursday 09 April 2026 02:49:35 +0000 (0:00:00.224) 0:00:03.487 ********
2026-04-09 02:49:40.628097 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411)
2026-04-09 02:49:40.628103 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411)
2026-04-09 02:49:40.628108 | orchestrator |
2026-04-09 02:49:40.628114 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.628134 | orchestrator | Thursday 09 April 2026 02:49:35 +0000 (0:00:00.470) 0:00:03.958 ********
2026-04-09 02:49:40.628138 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761)
2026-04-09 02:49:40.628142 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761)
2026-04-09 02:49:40.628146 | orchestrator |
2026-04-09 02:49:40.628149 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.628153 | orchestrator | Thursday 09 April 2026 02:49:36 +0000 (0:00:00.721) 0:00:04.679 ********
2026-04-09 02:49:40.628162 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad)
2026-04-09 02:49:40.628171 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad)
2026-04-09 02:49:40.628175 | orchestrator |
2026-04-09 02:49:40.628179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.628182 | orchestrator | Thursday 09 April 2026 02:49:37 +0000 (0:00:00.765) 0:00:05.444 ********
2026-04-09 02:49:40.628186 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be)
2026-04-09 02:49:40.628190 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be)
2026-04-09 02:49:40.628193 | orchestrator |
2026-04-09 02:49:40.628197 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:49:40.628201 | orchestrator | Thursday 09 April 2026 02:49:38 +0000 (0:00:01.002) 0:00:06.446 ********
2026-04-09 02:49:40.628204 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 02:49:40.628209 | orchestrator |
2026-04-09 02:49:40.628212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:49:40.628216 | orchestrator | Thursday 09 April 2026 02:49:38 +0000 (0:00:00.381) 0:00:06.828 ********
2026-04-09 02:49:40.628220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-09 02:49:40.628223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-09 02:49:40.628227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-09 02:49:40.628233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-09 02:49:40.628239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-09 02:49:40.628245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-09 02:49:40.628254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-09 02:49:40.628263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml
for testbed-node-3 => (item=loop7) 2026-04-09 02:49:40.628268 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-09 02:49:40.628274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-09 02:49:40.628280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-09 02:49:40.628286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-09 02:49:40.628291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-09 02:49:40.628297 | orchestrator | 2026-04-09 02:49:40.628303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:40.628308 | orchestrator | Thursday 09 April 2026 02:49:39 +0000 (0:00:00.414) 0:00:07.242 ******** 2026-04-09 02:49:40.628313 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:40.628320 | orchestrator | 2026-04-09 02:49:40.628325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:40.628331 | orchestrator | Thursday 09 April 2026 02:49:39 +0000 (0:00:00.228) 0:00:07.471 ******** 2026-04-09 02:49:40.628337 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:40.628342 | orchestrator | 2026-04-09 02:49:40.628369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:40.628375 | orchestrator | Thursday 09 April 2026 02:49:39 +0000 (0:00:00.221) 0:00:07.692 ******** 2026-04-09 02:49:40.628381 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:40.628387 | orchestrator | 2026-04-09 02:49:40.628393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:40.628399 | orchestrator | Thursday 09 April 2026 02:49:39 
+0000 (0:00:00.236) 0:00:07.929 ******** 2026-04-09 02:49:40.628412 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:40.628416 | orchestrator | 2026-04-09 02:49:40.628419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:40.628423 | orchestrator | Thursday 09 April 2026 02:49:39 +0000 (0:00:00.219) 0:00:08.149 ******** 2026-04-09 02:49:40.628427 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:40.628431 | orchestrator | 2026-04-09 02:49:40.628435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:40.628438 | orchestrator | Thursday 09 April 2026 02:49:40 +0000 (0:00:00.232) 0:00:08.381 ******** 2026-04-09 02:49:40.628442 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:40.628446 | orchestrator | 2026-04-09 02:49:40.628451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:40.628457 | orchestrator | Thursday 09 April 2026 02:49:40 +0000 (0:00:00.226) 0:00:08.608 ******** 2026-04-09 02:49:40.628461 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:40.628464 | orchestrator | 2026-04-09 02:49:40.628473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:49.075955 | orchestrator | Thursday 09 April 2026 02:49:40 +0000 (0:00:00.224) 0:00:08.833 ******** 2026-04-09 02:49:49.076046 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076061 | orchestrator | 2026-04-09 02:49:49.076073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:49.076084 | orchestrator | Thursday 09 April 2026 02:49:40 +0000 (0:00:00.197) 0:00:09.031 ******** 2026-04-09 02:49:49.076094 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-09 02:49:49.076105 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-09 
02:49:49.076131 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-09 02:49:49.076142 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-09 02:49:49.076153 | orchestrator | 2026-04-09 02:49:49.076163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:49.076174 | orchestrator | Thursday 09 April 2026 02:49:42 +0000 (0:00:01.251) 0:00:10.282 ******** 2026-04-09 02:49:49.076185 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076196 | orchestrator | 2026-04-09 02:49:49.076206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:49.076217 | orchestrator | Thursday 09 April 2026 02:49:42 +0000 (0:00:00.234) 0:00:10.516 ******** 2026-04-09 02:49:49.076227 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076238 | orchestrator | 2026-04-09 02:49:49.076248 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:49.076254 | orchestrator | Thursday 09 April 2026 02:49:42 +0000 (0:00:00.215) 0:00:10.732 ******** 2026-04-09 02:49:49.076261 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076267 | orchestrator | 2026-04-09 02:49:49.076273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:49.076279 | orchestrator | Thursday 09 April 2026 02:49:42 +0000 (0:00:00.257) 0:00:10.989 ******** 2026-04-09 02:49:49.076286 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076292 | orchestrator | 2026-04-09 02:49:49.076298 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-09 02:49:49.076304 | orchestrator | Thursday 09 April 2026 02:49:43 +0000 (0:00:00.232) 0:00:11.222 ******** 2026-04-09 02:49:49.076311 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-09 02:49:49.076317 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-09 02:49:49.076323 | orchestrator | 2026-04-09 02:49:49.076329 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-09 02:49:49.076336 | orchestrator | Thursday 09 April 2026 02:49:43 +0000 (0:00:00.190) 0:00:11.412 ******** 2026-04-09 02:49:49.076390 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076399 | orchestrator | 2026-04-09 02:49:49.076405 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-09 02:49:49.076412 | orchestrator | Thursday 09 April 2026 02:49:43 +0000 (0:00:00.146) 0:00:11.559 ******** 2026-04-09 02:49:49.076440 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076447 | orchestrator | 2026-04-09 02:49:49.076454 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-09 02:49:49.076460 | orchestrator | Thursday 09 April 2026 02:49:43 +0000 (0:00:00.180) 0:00:11.739 ******** 2026-04-09 02:49:49.076466 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076472 | orchestrator | 2026-04-09 02:49:49.076478 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-09 02:49:49.076484 | orchestrator | Thursday 09 April 2026 02:49:43 +0000 (0:00:00.168) 0:00:11.908 ******** 2026-04-09 02:49:49.076490 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:49:49.076497 | orchestrator | 2026-04-09 02:49:49.076503 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-09 02:49:49.076513 | orchestrator | Thursday 09 April 2026 02:49:43 +0000 (0:00:00.149) 0:00:12.058 ******** 2026-04-09 02:49:49.076528 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}}) 2026-04-09 02:49:49.076547 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1db77c01-2d77-5e1e-8d0a-4e535706b141'}}) 2026-04-09 02:49:49.076558 | orchestrator | 2026-04-09 02:49:49.076569 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-09 02:49:49.076580 | orchestrator | Thursday 09 April 2026 02:49:44 +0000 (0:00:00.193) 0:00:12.251 ******** 2026-04-09 02:49:49.076592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}})  2026-04-09 02:49:49.076606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1db77c01-2d77-5e1e-8d0a-4e535706b141'}})  2026-04-09 02:49:49.076617 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076628 | orchestrator | 2026-04-09 02:49:49.076639 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-09 02:49:49.076649 | orchestrator | Thursday 09 April 2026 02:49:44 +0000 (0:00:00.396) 0:00:12.647 ******** 2026-04-09 02:49:49.076660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}})  2026-04-09 02:49:49.076672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1db77c01-2d77-5e1e-8d0a-4e535706b141'}})  2026-04-09 02:49:49.076683 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076694 | orchestrator | 2026-04-09 02:49:49.076705 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-09 02:49:49.076717 | orchestrator | Thursday 09 April 2026 02:49:44 +0000 (0:00:00.175) 0:00:12.823 ******** 2026-04-09 02:49:49.076728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}})  2026-04-09 02:49:49.076759 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1db77c01-2d77-5e1e-8d0a-4e535706b141'}})  2026-04-09 02:49:49.076772 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076784 | orchestrator | 2026-04-09 02:49:49.076797 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-09 02:49:49.076806 | orchestrator | Thursday 09 April 2026 02:49:44 +0000 (0:00:00.158) 0:00:12.981 ******** 2026-04-09 02:49:49.076814 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:49:49.076821 | orchestrator | 2026-04-09 02:49:49.076829 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-09 02:49:49.076842 | orchestrator | Thursday 09 April 2026 02:49:44 +0000 (0:00:00.154) 0:00:13.135 ******** 2026-04-09 02:49:49.076850 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:49:49.076857 | orchestrator | 2026-04-09 02:49:49.076864 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-09 02:49:49.076872 | orchestrator | Thursday 09 April 2026 02:49:45 +0000 (0:00:00.158) 0:00:13.294 ******** 2026-04-09 02:49:49.076887 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076893 | orchestrator | 2026-04-09 02:49:49.076899 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-09 02:49:49.076906 | orchestrator | Thursday 09 April 2026 02:49:45 +0000 (0:00:00.143) 0:00:13.437 ******** 2026-04-09 02:49:49.076912 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076920 | orchestrator | 2026-04-09 02:49:49.076930 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-09 02:49:49.076946 | orchestrator | Thursday 09 April 2026 02:49:45 +0000 (0:00:00.152) 0:00:13.589 ******** 2026-04-09 02:49:49.076955 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.076965 | orchestrator | 2026-04-09 
02:49:49.076974 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-09 02:49:49.076984 | orchestrator | Thursday 09 April 2026 02:49:45 +0000 (0:00:00.144) 0:00:13.734 ******** 2026-04-09 02:49:49.076993 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 02:49:49.077002 | orchestrator |  "ceph_osd_devices": { 2026-04-09 02:49:49.077012 | orchestrator |  "sdb": { 2026-04-09 02:49:49.077022 | orchestrator |  "osd_lvm_uuid": "2f59a7c8-f88e-51a3-9620-37640e0ff9b5" 2026-04-09 02:49:49.077033 | orchestrator |  }, 2026-04-09 02:49:49.077043 | orchestrator |  "sdc": { 2026-04-09 02:49:49.077051 | orchestrator |  "osd_lvm_uuid": "1db77c01-2d77-5e1e-8d0a-4e535706b141" 2026-04-09 02:49:49.077057 | orchestrator |  } 2026-04-09 02:49:49.077064 | orchestrator |  } 2026-04-09 02:49:49.077070 | orchestrator | } 2026-04-09 02:49:49.077076 | orchestrator | 2026-04-09 02:49:49.077082 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-09 02:49:49.077089 | orchestrator | Thursday 09 April 2026 02:49:45 +0000 (0:00:00.163) 0:00:13.898 ******** 2026-04-09 02:49:49.077095 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.077101 | orchestrator | 2026-04-09 02:49:49.077107 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-09 02:49:49.077113 | orchestrator | Thursday 09 April 2026 02:49:45 +0000 (0:00:00.156) 0:00:14.054 ******** 2026-04-09 02:49:49.077120 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.077126 | orchestrator | 2026-04-09 02:49:49.077132 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-09 02:49:49.077138 | orchestrator | Thursday 09 April 2026 02:49:45 +0000 (0:00:00.137) 0:00:14.191 ******** 2026-04-09 02:49:49.077144 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:49:49.077150 | orchestrator | 2026-04-09 
02:49:49.077156 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-09 02:49:49.077162 | orchestrator | Thursday 09 April 2026 02:49:46 +0000 (0:00:00.143) 0:00:14.334 ******** 2026-04-09 02:49:49.077168 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 02:49:49.077175 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-09 02:49:49.077181 | orchestrator |  "ceph_osd_devices": { 2026-04-09 02:49:49.077187 | orchestrator |  "sdb": { 2026-04-09 02:49:49.077193 | orchestrator |  "osd_lvm_uuid": "2f59a7c8-f88e-51a3-9620-37640e0ff9b5" 2026-04-09 02:49:49.077199 | orchestrator |  }, 2026-04-09 02:49:49.077206 | orchestrator |  "sdc": { 2026-04-09 02:49:49.077212 | orchestrator |  "osd_lvm_uuid": "1db77c01-2d77-5e1e-8d0a-4e535706b141" 2026-04-09 02:49:49.077218 | orchestrator |  } 2026-04-09 02:49:49.077225 | orchestrator |  }, 2026-04-09 02:49:49.077231 | orchestrator |  "lvm_volumes": [ 2026-04-09 02:49:49.077237 | orchestrator |  { 2026-04-09 02:49:49.077243 | orchestrator |  "data": "osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5", 2026-04-09 02:49:49.077250 | orchestrator |  "data_vg": "ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5" 2026-04-09 02:49:49.077256 | orchestrator |  }, 2026-04-09 02:49:49.077262 | orchestrator |  { 2026-04-09 02:49:49.077268 | orchestrator |  "data": "osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141", 2026-04-09 02:49:49.077281 | orchestrator |  "data_vg": "ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141" 2026-04-09 02:49:49.077288 | orchestrator |  } 2026-04-09 02:49:49.077294 | orchestrator |  ] 2026-04-09 02:49:49.077300 | orchestrator |  } 2026-04-09 02:49:49.077306 | orchestrator | } 2026-04-09 02:49:49.077312 | orchestrator | 2026-04-09 02:49:49.077318 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-09 02:49:49.077325 | orchestrator | Thursday 09 April 2026 02:49:46 +0000 (0:00:00.457) 0:00:14.792 ******** 2026-04-09 
02:49:49.077331 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 02:49:49.077337 | orchestrator | 2026-04-09 02:49:49.077369 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-09 02:49:49.077376 | orchestrator | 2026-04-09 02:49:49.077382 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 02:49:49.077389 | orchestrator | Thursday 09 April 2026 02:49:48 +0000 (0:00:01.928) 0:00:16.721 ******** 2026-04-09 02:49:49.077395 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-09 02:49:49.077401 | orchestrator | 2026-04-09 02:49:49.077407 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 02:49:49.077413 | orchestrator | Thursday 09 April 2026 02:49:48 +0000 (0:00:00.271) 0:00:16.993 ******** 2026-04-09 02:49:49.077419 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:49:49.077426 | orchestrator | 2026-04-09 02:49:49.077439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.408717 | orchestrator | Thursday 09 April 2026 02:49:49 +0000 (0:00:00.294) 0:00:17.287 ******** 2026-04-09 02:49:59.408835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-09 02:49:59.408850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-09 02:49:59.408858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-09 02:49:59.408877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-09 02:49:59.408881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-09 02:49:59.408885 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-09 02:49:59.408889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-09 02:49:59.408893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-09 02:49:59.408897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-09 02:49:59.408901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-09 02:49:59.408905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-09 02:49:59.408909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-09 02:49:59.408943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-09 02:49:59.408950 | orchestrator | 2026-04-09 02:49:59.408957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.408964 | orchestrator | Thursday 09 April 2026 02:49:49 +0000 (0:00:00.451) 0:00:17.739 ******** 2026-04-09 02:49:59.408971 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.408979 | orchestrator | 2026-04-09 02:49:59.408985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.408990 | orchestrator | Thursday 09 April 2026 02:49:49 +0000 (0:00:00.211) 0:00:17.950 ******** 2026-04-09 02:49:59.408997 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409002 | orchestrator | 2026-04-09 02:49:59.409008 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409015 | orchestrator | Thursday 09 April 2026 02:49:49 +0000 (0:00:00.228) 0:00:18.179 ******** 2026-04-09 02:49:59.409041 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 02:49:59.409048 | orchestrator | 2026-04-09 02:49:59.409054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409061 | orchestrator | Thursday 09 April 2026 02:49:50 +0000 (0:00:00.246) 0:00:18.425 ******** 2026-04-09 02:49:59.409067 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409073 | orchestrator | 2026-04-09 02:49:59.409080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409084 | orchestrator | Thursday 09 April 2026 02:49:50 +0000 (0:00:00.720) 0:00:19.145 ******** 2026-04-09 02:49:59.409088 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409092 | orchestrator | 2026-04-09 02:49:59.409096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409099 | orchestrator | Thursday 09 April 2026 02:49:51 +0000 (0:00:00.221) 0:00:19.367 ******** 2026-04-09 02:49:59.409103 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409107 | orchestrator | 2026-04-09 02:49:59.409110 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409114 | orchestrator | Thursday 09 April 2026 02:49:51 +0000 (0:00:00.233) 0:00:19.600 ******** 2026-04-09 02:49:59.409118 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409122 | orchestrator | 2026-04-09 02:49:59.409125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409129 | orchestrator | Thursday 09 April 2026 02:49:51 +0000 (0:00:00.243) 0:00:19.844 ******** 2026-04-09 02:49:59.409133 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409137 | orchestrator | 2026-04-09 02:49:59.409140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409144 | 
orchestrator | Thursday 09 April 2026 02:49:51 +0000 (0:00:00.225) 0:00:20.070 ******** 2026-04-09 02:49:59.409148 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be) 2026-04-09 02:49:59.409153 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be) 2026-04-09 02:49:59.409157 | orchestrator | 2026-04-09 02:49:59.409161 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409167 | orchestrator | Thursday 09 April 2026 02:49:52 +0000 (0:00:00.501) 0:00:20.571 ******** 2026-04-09 02:49:59.409173 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74) 2026-04-09 02:49:59.409178 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74) 2026-04-09 02:49:59.409184 | orchestrator | 2026-04-09 02:49:59.409190 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409196 | orchestrator | Thursday 09 April 2026 02:49:52 +0000 (0:00:00.523) 0:00:21.094 ******** 2026-04-09 02:49:59.409202 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf) 2026-04-09 02:49:59.409209 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf) 2026-04-09 02:49:59.409215 | orchestrator | 2026-04-09 02:49:59.409221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409243 | orchestrator | Thursday 09 April 2026 02:49:53 +0000 (0:00:00.502) 0:00:21.597 ******** 2026-04-09 02:49:59.409250 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105) 2026-04-09 02:49:59.409256 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105) 2026-04-09 02:49:59.409263 | orchestrator | 2026-04-09 02:49:59.409269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:49:59.409278 | orchestrator | Thursday 09 April 2026 02:49:54 +0000 (0:00:00.754) 0:00:22.351 ******** 2026-04-09 02:49:59.409282 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 02:49:59.409292 | orchestrator | 2026-04-09 02:49:59.409296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:59.409300 | orchestrator | Thursday 09 April 2026 02:49:54 +0000 (0:00:00.649) 0:00:23.000 ******** 2026-04-09 02:49:59.409305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-09 02:49:59.409309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-09 02:49:59.409314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-09 02:49:59.409318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-09 02:49:59.409323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-09 02:49:59.409327 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-09 02:49:59.409332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-09 02:49:59.409356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-09 02:49:59.409361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-09 02:49:59.409366 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-09 02:49:59.409370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-09 02:49:59.409375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-09 02:49:59.409379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-09 02:49:59.409383 | orchestrator | 2026-04-09 02:49:59.409388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:59.409392 | orchestrator | Thursday 09 April 2026 02:49:55 +0000 (0:00:00.965) 0:00:23.965 ******** 2026-04-09 02:49:59.409396 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409400 | orchestrator | 2026-04-09 02:49:59.409405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:59.409409 | orchestrator | Thursday 09 April 2026 02:49:55 +0000 (0:00:00.208) 0:00:24.173 ******** 2026-04-09 02:49:59.409414 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409418 | orchestrator | 2026-04-09 02:49:59.409422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:59.409427 | orchestrator | Thursday 09 April 2026 02:49:56 +0000 (0:00:00.218) 0:00:24.392 ******** 2026-04-09 02:49:59.409431 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409435 | orchestrator | 2026-04-09 02:49:59.409440 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:49:59.409444 | orchestrator | Thursday 09 April 2026 02:49:56 +0000 (0:00:00.242) 0:00:24.634 ******** 2026-04-09 02:49:59.409448 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:49:59.409453 | orchestrator | 2026-04-09 02:49:59.409457 | orchestrator | TASK [Add known 
partitions to the list of available block devices] *************
2026-04-09 02:49:59.409461 | orchestrator | Thursday 09 April 2026 02:49:56 +0000 (0:00:00.229) 0:00:24.864 ********
2026-04-09 02:49:59.409466 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:49:59.409471 | orchestrator |
2026-04-09 02:49:59.409475 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:49:59.409480 | orchestrator | Thursday 09 April 2026 02:49:56 +0000 (0:00:00.233) 0:00:25.098 ********
2026-04-09 02:49:59.409484 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:49:59.409488 | orchestrator |
2026-04-09 02:49:59.409491 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:49:59.409495 | orchestrator | Thursday 09 April 2026 02:49:57 +0000 (0:00:00.242) 0:00:25.341 ********
2026-04-09 02:49:59.409502 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:49:59.409506 | orchestrator |
2026-04-09 02:49:59.409510 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:49:59.409514 | orchestrator | Thursday 09 April 2026 02:49:57 +0000 (0:00:00.239) 0:00:25.580 ********
2026-04-09 02:49:59.409517 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:49:59.409521 | orchestrator |
2026-04-09 02:49:59.409525 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:49:59.409528 | orchestrator | Thursday 09 April 2026 02:49:57 +0000 (0:00:00.232) 0:00:25.813 ********
2026-04-09 02:49:59.409532 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-09 02:49:59.409536 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-09 02:49:59.409541 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-09 02:49:59.409544 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-09 02:49:59.409548 | orchestrator |
2026-04-09 02:49:59.409552 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:49:59.409556 | orchestrator | Thursday 09 April 2026 02:49:58 +0000 (0:00:01.060) 0:00:26.874 ********
2026-04-09 02:49:59.409559 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.107127 | orchestrator |
2026-04-09 02:50:06.107265 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:06.107291 | orchestrator | Thursday 09 April 2026 02:49:59 +0000 (0:00:00.741) 0:00:27.615 ********
2026-04-09 02:50:06.107310 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.107330 | orchestrator |
2026-04-09 02:50:06.107452 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:06.107470 | orchestrator | Thursday 09 April 2026 02:49:59 +0000 (0:00:00.229) 0:00:27.844 ********
2026-04-09 02:50:06.107506 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.107523 | orchestrator |
2026-04-09 02:50:06.107539 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:06.107555 | orchestrator | Thursday 09 April 2026 02:49:59 +0000 (0:00:00.244) 0:00:28.088 ********
2026-04-09 02:50:06.107573 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.107588 | orchestrator |
2026-04-09 02:50:06.107604 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-09 02:50:06.107619 | orchestrator | Thursday 09 April 2026 02:50:00 +0000 (0:00:00.255) 0:00:28.343 ********
2026-04-09 02:50:06.107635 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-09 02:50:06.107651 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-09 02:50:06.107667 | orchestrator |
2026-04-09 02:50:06.107684 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-09 02:50:06.107700 | orchestrator | Thursday 09 April 2026 02:50:00 +0000 (0:00:00.143) 0:00:28.556 ********
2026-04-09 02:50:06.107717 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.107733 | orchestrator |
2026-04-09 02:50:06.107750 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-09 02:50:06.107766 | orchestrator | Thursday 09 April 2026 02:50:00 +0000 (0:00:00.146) 0:00:28.699 ********
2026-04-09 02:50:06.107781 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.107799 | orchestrator |
2026-04-09 02:50:06.107817 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-09 02:50:06.107832 | orchestrator | Thursday 09 April 2026 02:50:00 +0000 (0:00:00.156) 0:00:28.845 ********
2026-04-09 02:50:06.107848 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.107866 | orchestrator |
2026-04-09 02:50:06.107882 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-09 02:50:06.107899 | orchestrator | Thursday 09 April 2026 02:50:00 +0000 (0:00:00.151) 0:00:29.002 ********
2026-04-09 02:50:06.107915 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:50:06.107932 | orchestrator |
2026-04-09 02:50:06.107949 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-09 02:50:06.107965 | orchestrator | Thursday 09 April 2026 02:50:00 +0000 (0:00:00.151) 0:00:29.154 ********
2026-04-09 02:50:06.108008 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68e90870-4763-57e7-8e76-63c40a6d6d6f'}})
2026-04-09 02:50:06.108025 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9961abb4-5e3b-57c6-b852-cf206941d3b6'}})
2026-04-09 02:50:06.108041 | orchestrator |
2026-04-09 02:50:06.108057 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-09 02:50:06.108072 | orchestrator | Thursday 09 April 2026 02:50:01 +0000 (0:00:00.211) 0:00:29.366 ********
2026-04-09 02:50:06.108090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68e90870-4763-57e7-8e76-63c40a6d6d6f'}})
2026-04-09 02:50:06.108107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9961abb4-5e3b-57c6-b852-cf206941d3b6'}})
2026-04-09 02:50:06.108123 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.108139 | orchestrator |
2026-04-09 02:50:06.108154 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-09 02:50:06.108169 | orchestrator | Thursday 09 April 2026 02:50:01 +0000 (0:00:00.171) 0:00:29.537 ********
2026-04-09 02:50:06.108184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68e90870-4763-57e7-8e76-63c40a6d6d6f'}})
2026-04-09 02:50:06.108201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9961abb4-5e3b-57c6-b852-cf206941d3b6'}})
2026-04-09 02:50:06.108216 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.108231 | orchestrator |
2026-04-09 02:50:06.108248 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-09 02:50:06.108263 | orchestrator | Thursday 09 April 2026 02:50:01 +0000 (0:00:00.420) 0:00:29.958 ********
2026-04-09 02:50:06.108277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68e90870-4763-57e7-8e76-63c40a6d6d6f'}})
2026-04-09 02:50:06.108294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9961abb4-5e3b-57c6-b852-cf206941d3b6'}})
2026-04-09 02:50:06.108309 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.108324 | orchestrator |
2026-04-09 02:50:06.108376 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-09 02:50:06.108394 | orchestrator | Thursday 09 April 2026 02:50:01 +0000 (0:00:00.177) 0:00:30.135 ********
2026-04-09 02:50:06.108410 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:50:06.108425 | orchestrator |
2026-04-09 02:50:06.108441 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-09 02:50:06.108456 | orchestrator | Thursday 09 April 2026 02:50:02 +0000 (0:00:00.163) 0:00:30.299 ********
2026-04-09 02:50:06.108472 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:50:06.108489 | orchestrator |
2026-04-09 02:50:06.108505 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-09 02:50:06.108522 | orchestrator | Thursday 09 April 2026 02:50:02 +0000 (0:00:00.152) 0:00:30.451 ********
2026-04-09 02:50:06.108568 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.108587 | orchestrator |
2026-04-09 02:50:06.108604 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-09 02:50:06.108620 | orchestrator | Thursday 09 April 2026 02:50:02 +0000 (0:00:00.145) 0:00:30.597 ********
2026-04-09 02:50:06.108636 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.108652 | orchestrator |
2026-04-09 02:50:06.108669 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-09 02:50:06.108685 | orchestrator | Thursday 09 April 2026 02:50:02 +0000 (0:00:00.165) 0:00:30.762 ********
2026-04-09 02:50:06.108713 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.108730 | orchestrator |
2026-04-09 02:50:06.108744 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-09 02:50:06.108759 | orchestrator | Thursday 09 April 2026 02:50:02 +0000 (0:00:00.159) 0:00:30.922 ********
2026-04-09 02:50:06.108789 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 02:50:06.108805 | orchestrator |     "ceph_osd_devices": {
2026-04-09 02:50:06.108820 | orchestrator |         "sdb": {
2026-04-09 02:50:06.108836 | orchestrator |             "osd_lvm_uuid": "68e90870-4763-57e7-8e76-63c40a6d6d6f"
2026-04-09 02:50:06.108852 | orchestrator |         },
2026-04-09 02:50:06.108868 | orchestrator |         "sdc": {
2026-04-09 02:50:06.108884 | orchestrator |             "osd_lvm_uuid": "9961abb4-5e3b-57c6-b852-cf206941d3b6"
2026-04-09 02:50:06.108899 | orchestrator |         }
2026-04-09 02:50:06.108914 | orchestrator |     }
2026-04-09 02:50:06.108929 | orchestrator | }
2026-04-09 02:50:06.108946 | orchestrator |
2026-04-09 02:50:06.108962 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-09 02:50:06.108977 | orchestrator | Thursday 09 April 2026 02:50:02 +0000 (0:00:00.149) 0:00:31.071 ********
2026-04-09 02:50:06.108993 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.109008 | orchestrator |
2026-04-09 02:50:06.109023 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-09 02:50:06.109039 | orchestrator | Thursday 09 April 2026 02:50:02 +0000 (0:00:00.142) 0:00:31.214 ********
2026-04-09 02:50:06.109053 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.109068 | orchestrator |
2026-04-09 02:50:06.109083 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-09 02:50:06.109099 | orchestrator | Thursday 09 April 2026 02:50:03 +0000 (0:00:00.156) 0:00:31.370 ********
2026-04-09 02:50:06.109115 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:50:06.109130 | orchestrator |
2026-04-09 02:50:06.109144 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-09 02:50:06.109160 | orchestrator | Thursday 09 April 2026 02:50:03 +0000 (0:00:00.135) 0:00:31.505 ********
2026-04-09 02:50:06.109176 | orchestrator | changed: [testbed-node-4] => {
2026-04-09 02:50:06.109192 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-09 02:50:06.109208 | orchestrator |         "ceph_osd_devices": {
2026-04-09 02:50:06.109224 | orchestrator |             "sdb": {
2026-04-09 02:50:06.109240 | orchestrator |                 "osd_lvm_uuid": "68e90870-4763-57e7-8e76-63c40a6d6d6f"
2026-04-09 02:50:06.109257 | orchestrator |             },
2026-04-09 02:50:06.109272 | orchestrator |             "sdc": {
2026-04-09 02:50:06.109288 | orchestrator |                 "osd_lvm_uuid": "9961abb4-5e3b-57c6-b852-cf206941d3b6"
2026-04-09 02:50:06.109303 | orchestrator |             }
2026-04-09 02:50:06.109319 | orchestrator |         },
2026-04-09 02:50:06.109360 | orchestrator |         "lvm_volumes": [
2026-04-09 02:50:06.109378 | orchestrator |             {
2026-04-09 02:50:06.109396 | orchestrator |                 "data": "osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f",
2026-04-09 02:50:06.109413 | orchestrator |                 "data_vg": "ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f"
2026-04-09 02:50:06.109429 | orchestrator |             },
2026-04-09 02:50:06.109446 | orchestrator |             {
2026-04-09 02:50:06.109461 | orchestrator |                 "data": "osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6",
2026-04-09 02:50:06.109478 | orchestrator |                 "data_vg": "ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6"
2026-04-09 02:50:06.109495 | orchestrator |             }
2026-04-09 02:50:06.109513 | orchestrator |         ]
2026-04-09 02:50:06.109531 | orchestrator |     }
2026-04-09 02:50:06.109547 | orchestrator | }
2026-04-09 02:50:06.109565 | orchestrator |
2026-04-09 02:50:06.109583 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-09 02:50:06.109600 | orchestrator | Thursday 09 April 2026 02:50:03 +0000 (0:00:00.523) 0:00:32.029 ********
2026-04-09 02:50:06.109618 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-09 02:50:06.109635 | orchestrator |
2026-04-09 02:50:06.109650 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-09 02:50:06.109666 | orchestrator |
2026-04-09 02:50:06.109681 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 02:50:06.109716 | orchestrator | Thursday 09 April 2026 02:50:05 +0000 (0:00:01.328) 0:00:33.358 ********
2026-04-09 02:50:06.109734 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-09 02:50:06.109751 | orchestrator |
2026-04-09 02:50:06.109769 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 02:50:06.109787 | orchestrator | Thursday 09 April 2026 02:50:05 +0000 (0:00:00.301) 0:00:33.659 ********
2026-04-09 02:50:06.109805 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:50:06.109820 | orchestrator |
2026-04-09 02:50:06.109837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:06.109853 | orchestrator | Thursday 09 April 2026 02:50:05 +0000 (0:00:00.249) 0:00:33.908 ********
2026-04-09 02:50:06.109869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-09 02:50:06.109885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-09 02:50:06.109901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-09 02:50:06.109916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-09 02:50:06.109930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-09 02:50:06.109963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-09 02:50:16.009529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-09 02:50:16.009636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-09 02:50:16.009648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-09 02:50:16.009673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-09 02:50:16.009683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-09 02:50:16.009692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-09 02:50:16.009701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-09 02:50:16.009710 | orchestrator |
2026-04-09 02:50:16.009720 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.009729 | orchestrator | Thursday 09 April 2026 02:50:06 +0000 (0:00:00.405) 0:00:34.314 ********
2026-04-09 02:50:16.009738 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.009749 | orchestrator |
2026-04-09 02:50:16.009758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.009766 | orchestrator | Thursday 09 April 2026 02:50:06 +0000 (0:00:00.240) 0:00:34.554 ********
2026-04-09 02:50:16.009775 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.009783 | orchestrator |
2026-04-09 02:50:16.009792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.009801 | orchestrator | Thursday 09 April 2026 02:50:06 +0000 (0:00:00.208) 0:00:34.763 ********
2026-04-09 02:50:16.009810 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.009818 | orchestrator |
2026-04-09 02:50:16.009827 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.009836 | orchestrator | Thursday 09 April 2026 02:50:06 +0000 (0:00:00.227) 0:00:34.990 ********
2026-04-09 02:50:16.009844 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.009853 | orchestrator |
2026-04-09 02:50:16.009862 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.009871 | orchestrator | Thursday 09 April 2026 02:50:07 +0000 (0:00:00.704) 0:00:35.695 ********
2026-04-09 02:50:16.009880 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.009888 | orchestrator |
2026-04-09 02:50:16.009897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.009906 | orchestrator | Thursday 09 April 2026 02:50:07 +0000 (0:00:00.212) 0:00:35.908 ********
2026-04-09 02:50:16.009934 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.009944 | orchestrator |
2026-04-09 02:50:16.009952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.009963 | orchestrator | Thursday 09 April 2026 02:50:07 +0000 (0:00:00.258) 0:00:36.167 ********
2026-04-09 02:50:16.009978 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.009992 | orchestrator |
2026-04-09 02:50:16.010006 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.010083 | orchestrator | Thursday 09 April 2026 02:50:08 +0000 (0:00:00.227) 0:00:36.395 ********
2026-04-09 02:50:16.010099 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.010114 | orchestrator |
2026-04-09 02:50:16.010129 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.010144 | orchestrator | Thursday 09 April 2026 02:50:08 +0000 (0:00:00.232) 0:00:36.627 ********
2026-04-09 02:50:16.010160 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965)
2026-04-09 02:50:16.010179 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965)
2026-04-09 02:50:16.010195 | orchestrator |
2026-04-09 02:50:16.010212 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.010228 | orchestrator | Thursday 09 April 2026 02:50:08 +0000 (0:00:00.464) 0:00:37.092 ********
2026-04-09 02:50:16.010244 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e)
2026-04-09 02:50:16.010259 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e)
2026-04-09 02:50:16.010274 | orchestrator |
2026-04-09 02:50:16.010289 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.010304 | orchestrator | Thursday 09 April 2026 02:50:09 +0000 (0:00:00.553) 0:00:37.645 ********
2026-04-09 02:50:16.010320 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4)
2026-04-09 02:50:16.010402 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4)
2026-04-09 02:50:16.010417 | orchestrator |
2026-04-09 02:50:16.010431 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.010445 | orchestrator | Thursday 09 April 2026 02:50:09 +0000 (0:00:00.507) 0:00:38.153 ********
2026-04-09 02:50:16.010458 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d)
2026-04-09 02:50:16.010472 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d)
2026-04-09 02:50:16.010486 | orchestrator |
2026-04-09 02:50:16.010499 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:50:16.010513 | orchestrator | Thursday 09 April 2026 02:50:10 +0000 (0:00:00.562) 0:00:38.716 ********
2026-04-09 02:50:16.010529 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 02:50:16.010543 | orchestrator |
2026-04-09 02:50:16.010557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.010598 | orchestrator | Thursday 09 April 2026 02:50:10 +0000 (0:00:00.371) 0:00:39.087 ********
2026-04-09 02:50:16.010614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-09 02:50:16.010630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-09 02:50:16.010641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-09 02:50:16.010659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-09 02:50:16.010668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-09 02:50:16.010676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-09 02:50:16.010697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-09 02:50:16.010705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-09 02:50:16.010714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-09 02:50:16.010722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-09 02:50:16.010731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-09 02:50:16.010740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-09 02:50:16.010748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-09 02:50:16.010757 | orchestrator |
2026-04-09 02:50:16.010765 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.010774 | orchestrator | Thursday 09 April 2026 02:50:11 +0000 (0:00:00.678) 0:00:39.766 ********
2026-04-09 02:50:16.010783 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.010791 | orchestrator |
2026-04-09 02:50:16.010800 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.010808 | orchestrator | Thursday 09 April 2026 02:50:11 +0000 (0:00:00.234) 0:00:40.000 ********
2026-04-09 02:50:16.010817 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.010825 | orchestrator |
2026-04-09 02:50:16.010834 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.010842 | orchestrator | Thursday 09 April 2026 02:50:12 +0000 (0:00:00.228) 0:00:40.228 ********
2026-04-09 02:50:16.010851 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.010859 | orchestrator |
2026-04-09 02:50:16.010868 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.010876 | orchestrator | Thursday 09 April 2026 02:50:12 +0000 (0:00:00.225) 0:00:40.453 ********
2026-04-09 02:50:16.010885 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.010893 | orchestrator |
2026-04-09 02:50:16.010902 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.010911 | orchestrator | Thursday 09 April 2026 02:50:12 +0000 (0:00:00.243) 0:00:40.697 ********
2026-04-09 02:50:16.010919 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.010928 | orchestrator |
2026-04-09 02:50:16.010937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.010945 | orchestrator | Thursday 09 April 2026 02:50:12 +0000 (0:00:00.218) 0:00:40.916 ********
2026-04-09 02:50:16.010954 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.010963 | orchestrator |
2026-04-09 02:50:16.010971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.010980 | orchestrator | Thursday 09 April 2026 02:50:12 +0000 (0:00:00.242) 0:00:41.158 ********
2026-04-09 02:50:16.010988 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.010997 | orchestrator |
2026-04-09 02:50:16.011005 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.011014 | orchestrator | Thursday 09 April 2026 02:50:13 +0000 (0:00:00.223) 0:00:41.381 ********
2026-04-09 02:50:16.011023 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.011031 | orchestrator |
2026-04-09 02:50:16.011040 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.011049 | orchestrator | Thursday 09 April 2026 02:50:13 +0000 (0:00:00.246) 0:00:41.628 ********
2026-04-09 02:50:16.011057 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-09 02:50:16.011066 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-09 02:50:16.011075 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-09 02:50:16.011084 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-09 02:50:16.011092 | orchestrator |
2026-04-09 02:50:16.011107 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.011116 | orchestrator | Thursday 09 April 2026 02:50:14 +0000 (0:00:00.991) 0:00:42.619 ********
2026-04-09 02:50:16.011124 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.011133 | orchestrator |
2026-04-09 02:50:16.011141 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.011150 | orchestrator | Thursday 09 April 2026 02:50:14 +0000 (0:00:00.225) 0:00:42.845 ********
2026-04-09 02:50:16.011159 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.011167 | orchestrator |
2026-04-09 02:50:16.011176 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.011185 | orchestrator | Thursday 09 April 2026 02:50:14 +0000 (0:00:00.271) 0:00:43.117 ********
2026-04-09 02:50:16.011193 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.011202 | orchestrator |
2026-04-09 02:50:16.011210 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:50:16.011219 | orchestrator | Thursday 09 April 2026 02:50:15 +0000 (0:00:00.849) 0:00:43.967 ********
2026-04-09 02:50:16.011228 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:16.011236 | orchestrator |
2026-04-09 02:50:16.011250 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-09 02:50:20.365889 | orchestrator | Thursday 09 April 2026 02:50:16 +0000 (0:00:00.251) 0:00:44.218 ********
2026-04-09 02:50:20.365987 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-09 02:50:20.365997 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-09 02:50:20.366003 | orchestrator |
2026-04-09 02:50:20.366010 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-09 02:50:20.366073 | orchestrator | Thursday 09 April 2026 02:50:16 +0000 (0:00:00.185) 0:00:44.404 ********
2026-04-09 02:50:20.366081 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366086 | orchestrator |
2026-04-09 02:50:20.366092 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-09 02:50:20.366097 | orchestrator | Thursday 09 April 2026 02:50:16 +0000 (0:00:00.164) 0:00:44.569 ********
2026-04-09 02:50:20.366102 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366107 | orchestrator |
2026-04-09 02:50:20.366113 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-09 02:50:20.366118 | orchestrator | Thursday 09 April 2026 02:50:16 +0000 (0:00:00.147) 0:00:44.717 ********
2026-04-09 02:50:20.366123 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366128 | orchestrator |
2026-04-09 02:50:20.366133 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-09 02:50:20.366138 | orchestrator | Thursday 09 April 2026 02:50:16 +0000 (0:00:00.160) 0:00:44.877 ********
2026-04-09 02:50:20.366143 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:50:20.366150 | orchestrator |
2026-04-09 02:50:20.366155 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-09 02:50:20.366160 | orchestrator | Thursday 09 April 2026 02:50:16 +0000 (0:00:00.147) 0:00:45.025 ********
2026-04-09 02:50:20.366165 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27c4b53f-c2bf-5253-84b2-9319684e0f9e'}})
2026-04-09 02:50:20.366171 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}})
2026-04-09 02:50:20.366176 | orchestrator |
2026-04-09 02:50:20.366182 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-09 02:50:20.366186 | orchestrator | Thursday 09 April 2026 02:50:16 +0000 (0:00:00.171) 0:00:45.196 ********
2026-04-09 02:50:20.366192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27c4b53f-c2bf-5253-84b2-9319684e0f9e'}})
2026-04-09 02:50:20.366199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}})
2026-04-09 02:50:20.366220 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366226 | orchestrator |
2026-04-09 02:50:20.366231 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-09 02:50:20.366236 | orchestrator | Thursday 09 April 2026 02:50:17 +0000 (0:00:00.172) 0:00:45.368 ********
2026-04-09 02:50:20.366241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27c4b53f-c2bf-5253-84b2-9319684e0f9e'}})
2026-04-09 02:50:20.366246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}})
2026-04-09 02:50:20.366251 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366256 | orchestrator |
2026-04-09 02:50:20.366261 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-09 02:50:20.366266 | orchestrator | Thursday 09 April 2026 02:50:17 +0000 (0:00:00.162) 0:00:45.530 ********
2026-04-09 02:50:20.366271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27c4b53f-c2bf-5253-84b2-9319684e0f9e'}})
2026-04-09 02:50:20.366276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}})
2026-04-09 02:50:20.366281 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366286 | orchestrator |
2026-04-09 02:50:20.366292 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-09 02:50:20.366297 | orchestrator | Thursday 09 April 2026 02:50:17 +0000 (0:00:00.167) 0:00:45.698 ********
2026-04-09 02:50:20.366302 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:50:20.366307 | orchestrator |
2026-04-09 02:50:20.366312 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-09 02:50:20.366317 | orchestrator | Thursday 09 April 2026 02:50:17 +0000 (0:00:00.158) 0:00:45.857 ********
2026-04-09 02:50:20.366362 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:50:20.366369 | orchestrator |
2026-04-09 02:50:20.366374 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-09 02:50:20.366379 | orchestrator | Thursday 09 April 2026 02:50:18 +0000 (0:00:00.376) 0:00:46.233 ********
2026-04-09 02:50:20.366385 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366394 | orchestrator |
2026-04-09 02:50:20.366401 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-09 02:50:20.366409 | orchestrator | Thursday 09 April 2026 02:50:18 +0000 (0:00:00.140) 0:00:46.374 ********
2026-04-09 02:50:20.366417 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366425 | orchestrator |
2026-04-09 02:50:20.366432 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-09 02:50:20.366440 | orchestrator | Thursday 09 April 2026 02:50:18 +0000 (0:00:00.145) 0:00:46.520 ********
2026-04-09 02:50:20.366449 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366457 | orchestrator |
2026-04-09 02:50:20.366465 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-09 02:50:20.366475 | orchestrator | Thursday 09 April 2026 02:50:18 +0000 (0:00:00.141) 0:00:46.661 ********
2026-04-09 02:50:20.366482 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 02:50:20.366487 | orchestrator |     "ceph_osd_devices": {
2026-04-09 02:50:20.366494 | orchestrator |         "sdb": {
2026-04-09 02:50:20.366514 | orchestrator |             "osd_lvm_uuid": "27c4b53f-c2bf-5253-84b2-9319684e0f9e"
2026-04-09 02:50:20.366520 | orchestrator |         },
2026-04-09 02:50:20.366526 | orchestrator |         "sdc": {
2026-04-09 02:50:20.366532 | orchestrator |             "osd_lvm_uuid": "07250cb7-fce6-51fa-be28-6bf5f5cf4ef6"
2026-04-09 02:50:20.366538 | orchestrator |         }
2026-04-09 02:50:20.366544 | orchestrator |     }
2026-04-09 02:50:20.366550 | orchestrator | }
2026-04-09 02:50:20.366556 | orchestrator |
2026-04-09 02:50:20.366566 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-09 02:50:20.366573 | orchestrator | Thursday 09 April 2026 02:50:18 +0000 (0:00:00.150) 0:00:46.812 ********
2026-04-09 02:50:20.366578 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366591 | orchestrator |
2026-04-09 02:50:20.366597 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-09 02:50:20.366603 | orchestrator | Thursday 09 April 2026 02:50:18 +0000 (0:00:00.152) 0:00:46.964 ********
2026-04-09 02:50:20.366608 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366613 | orchestrator |
2026-04-09 02:50:20.366618 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-09 02:50:20.366623 | orchestrator | Thursday 09 April 2026 02:50:18 +0000 (0:00:00.142) 0:00:47.107 ********
2026-04-09 02:50:20.366628 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:50:20.366633 | orchestrator |
2026-04-09 02:50:20.366638 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-09 02:50:20.366643 | orchestrator | Thursday 09 April 2026 02:50:19 +0000 (0:00:00.152) 0:00:47.259 ********
2026-04-09 02:50:20.366648 | orchestrator | changed: [testbed-node-5] => {
2026-04-09 02:50:20.366653 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-09 02:50:20.366658 | orchestrator |         "ceph_osd_devices": {
2026-04-09 02:50:20.366664 | orchestrator |             "sdb": {
2026-04-09 02:50:20.366668 | orchestrator |                 "osd_lvm_uuid": "27c4b53f-c2bf-5253-84b2-9319684e0f9e"
2026-04-09 02:50:20.366674 | orchestrator |             },
2026-04-09 02:50:20.366679 | orchestrator |             "sdc": {
2026-04-09 02:50:20.366684 | orchestrator |                 "osd_lvm_uuid": "07250cb7-fce6-51fa-be28-6bf5f5cf4ef6"
2026-04-09 02:50:20.366689 | orchestrator |             }
2026-04-09 02:50:20.366694 | orchestrator |         },
2026-04-09 02:50:20.366699 | orchestrator |         "lvm_volumes": [
2026-04-09 02:50:20.366704 | orchestrator |             {
2026-04-09 02:50:20.366709 | orchestrator |                 "data": "osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e",
2026-04-09 02:50:20.366714 | orchestrator |                 "data_vg": "ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e"
2026-04-09 02:50:20.366719 | orchestrator |             },
2026-04-09 02:50:20.366724 | orchestrator |             {
2026-04-09 02:50:20.366729 | orchestrator |                 "data": "osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6",
2026-04-09 02:50:20.366734 | orchestrator |                 "data_vg": "ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6"
2026-04-09 02:50:20.366739 | orchestrator |             }
2026-04-09 02:50:20.366744 | orchestrator |         ]
2026-04-09 02:50:20.366749 | orchestrator |     }
2026-04-09 02:50:20.366754 | orchestrator | }
2026-04-09 02:50:20.366759 | orchestrator |
2026-04-09 02:50:20.366764 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-09 02:50:20.366769 | orchestrator | Thursday 09 April 2026 02:50:19 +0000 (0:00:00.231) 0:00:47.491 ********
2026-04-09 02:50:20.366774 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-09 02:50:20.366779 | orchestrator |
2026-04-09 02:50:20.366784 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 02:50:20.366789 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 02:50:20.366795 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 02:50:20.366800 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 02:50:20.366805 | orchestrator |
2026-04-09 02:50:20.366810 | orchestrator |
2026-04-09 02:50:20.366815 | orchestrator |
2026-04-09 02:50:20.366820 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 02:50:20.366825 | orchestrator | Thursday 09 April 2026 02:50:20 +0000 (0:00:01.067) 0:00:48.558 ********
2026-04-09 02:50:20.366830 | orchestrator | ===============================================================================
2026-04-09 02:50:20.366835 | orchestrator | Write configuration file ------------------------------------------------ 4.33s
2026-04-09 02:50:20.366844 | orchestrator | Add known partitions to the list of available block devices ------------- 2.06s
2026-04-09 02:50:20.366849 | orchestrator | Add known links to the list of available block devices ------------------ 1.44s
2026-04-09 02:50:20.366856 | orchestrator | Add known partitions to the list of available block devices ------------- 1.25s
2026-04-09 02:50:20.366864 | orchestrator | Print configuration data ------------------------------------------------ 1.21s
2026-04-09 02:50:20.366872 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s
2026-04-09 02:50:20.366880 | orchestrator | Add known links to the list of available block devices ------------------ 1.00s
2026-04-09 02:50:20.366888 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-04-09 02:50:20.366896 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2026-04-09 02:50:20.366905 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s
2026-04-09
02:50:20.366913 | orchestrator | Get initial list of available block devices ----------------------------- 0.82s 2026-04-09 02:50:20.366918 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s 2026-04-09 02:50:20.366923 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.76s 2026-04-09 02:50:20.366932 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-04-09 02:50:20.839648 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-04-09 02:50:20.839749 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.74s 2026-04-09 02:50:20.839765 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-04-09 02:50:20.839796 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-04-09 02:50:20.839807 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-04-09 02:50:20.839818 | orchestrator | Set OSD devices config data --------------------------------------------- 0.69s 2026-04-09 02:50:43.680585 | orchestrator | 2026-04-09 02:50:43 | INFO  | Task 21b6743d-95c3-41eb-9ebb-0a93a2485775 (sync inventory) is running in background. Output coming soon. 
2026-04-09 02:51:16.177071 | orchestrator | 2026-04-09 02:50:45 | INFO  | Starting group_vars file reorganization
2026-04-09 02:51:16.177200 | orchestrator | 2026-04-09 02:50:45 | INFO  | Moved 0 file(s) to their respective directories
2026-04-09 02:51:16.177227 | orchestrator | 2026-04-09 02:50:45 | INFO  | Group_vars file reorganization completed
2026-04-09 02:51:16.177244 | orchestrator | 2026-04-09 02:50:48 | INFO  | Starting variable preparation from inventory
2026-04-09 02:51:16.177259 | orchestrator | 2026-04-09 02:50:51 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-09 02:51:16.177274 | orchestrator | 2026-04-09 02:50:51 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-09 02:51:16.177288 | orchestrator | 2026-04-09 02:50:51 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-09 02:51:16.177394 | orchestrator | 2026-04-09 02:50:51 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-09 02:51:16.177411 | orchestrator | 2026-04-09 02:50:51 | INFO  | Variable preparation completed
2026-04-09 02:51:16.177427 | orchestrator | 2026-04-09 02:50:53 | INFO  | Starting inventory overwrite handling
2026-04-09 02:51:16.177441 | orchestrator | 2026-04-09 02:50:53 | INFO  | Handling group overwrites in 99-overwrite
2026-04-09 02:51:16.177456 | orchestrator | 2026-04-09 02:50:53 | INFO  | Removing group frr:children from 60-generic
2026-04-09 02:51:16.177471 | orchestrator | 2026-04-09 02:50:53 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-09 02:51:16.177487 | orchestrator | 2026-04-09 02:50:53 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-09 02:51:16.177538 | orchestrator | 2026-04-09 02:50:53 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-09 02:51:16.177554 | orchestrator | 2026-04-09 02:50:53 | INFO  | Handling group overwrites in 20-roles
2026-04-09 02:51:16.177569 | orchestrator | 2026-04-09 02:50:53 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-09 02:51:16.177585 | orchestrator | 2026-04-09 02:50:53 | INFO  | Removed 5 group(s) in total
2026-04-09 02:51:16.177594 | orchestrator | 2026-04-09 02:50:53 | INFO  | Inventory overwrite handling completed
2026-04-09 02:51:16.177605 | orchestrator | 2026-04-09 02:50:55 | INFO  | Starting merge of inventory files
2026-04-09 02:51:16.177616 | orchestrator | 2026-04-09 02:50:55 | INFO  | Inventory files merged successfully
2026-04-09 02:51:16.177625 | orchestrator | 2026-04-09 02:51:01 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-09 02:51:16.177635 | orchestrator | 2026-04-09 02:51:14 | INFO  | Successfully wrote ClusterShell configuration
2026-04-09 02:51:16.177645 | orchestrator | [master 31fc4ad] 2026-04-09-02-51
2026-04-09 02:51:16.177657 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-04-09 02:51:18.746988 | orchestrator | 2026-04-09 02:51:18 | INFO  | Task ee0d0257-afa2-49d0-97df-604b6e402a0b (ceph-create-lvm-devices) was prepared for execution.
2026-04-09 02:51:18.747122 | orchestrator | 2026-04-09 02:51:18 | INFO  | It takes a moment until task ee0d0257-afa2-49d0-97df-604b6e402a0b (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-09 02:51:32.350223 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 02:51:32.350369 | orchestrator | 2.16.14
2026-04-09 02:51:32.350383 | orchestrator |
2026-04-09 02:51:32.350392 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-09 02:51:32.350400 | orchestrator |
2026-04-09 02:51:32.350407 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 02:51:32.350414 | orchestrator | Thursday 09 April 2026 02:51:23 +0000 (0:00:00.345) 0:00:00.345 ********
2026-04-09 02:51:32.350422 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 02:51:32.350429 | orchestrator |
2026-04-09 02:51:32.350436 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 02:51:32.350442 | orchestrator | Thursday 09 April 2026 02:51:24 +0000 (0:00:00.299) 0:00:00.645 ********
2026-04-09 02:51:32.350449 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:51:32.350456 | orchestrator |
2026-04-09 02:51:32.350463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350470 | orchestrator | Thursday 09 April 2026 02:51:24 +0000 (0:00:00.230) 0:00:00.875 ********
2026-04-09 02:51:32.350477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-09 02:51:32.350496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-09 02:51:32.350503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-09 02:51:32.350510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-09 02:51:32.350517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-09 02:51:32.350523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-09 02:51:32.350530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-09 02:51:32.350537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-09 02:51:32.350543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-09 02:51:32.350550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-09 02:51:32.350575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-09 02:51:32.350582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-09 02:51:32.350588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-09 02:51:32.350595 | orchestrator |
2026-04-09 02:51:32.350602 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350608 | orchestrator | Thursday 09 April 2026 02:51:25 +0000 (0:00:00.716) 0:00:01.591 ********
2026-04-09 02:51:32.350615 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.350622 | orchestrator |
2026-04-09 02:51:32.350628 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350635 | orchestrator | Thursday 09 April 2026 02:51:25 +0000 (0:00:00.220) 0:00:01.812 ********
2026-04-09 02:51:32.350641 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.350648 | orchestrator |
2026-04-09 02:51:32.350655 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350661 | orchestrator | Thursday 09 April 2026 02:51:25 +0000 (0:00:00.199) 0:00:02.012 ********
2026-04-09 02:51:32.350668 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.350674 | orchestrator |
2026-04-09 02:51:32.350681 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350687 | orchestrator | Thursday 09 April 2026 02:51:25 +0000 (0:00:00.216) 0:00:02.228 ********
2026-04-09 02:51:32.350694 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.350700 | orchestrator |
2026-04-09 02:51:32.350707 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350714 | orchestrator | Thursday 09 April 2026 02:51:25 +0000 (0:00:00.219) 0:00:02.448 ********
2026-04-09 02:51:32.350720 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.350727 | orchestrator |
2026-04-09 02:51:32.350733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350740 | orchestrator | Thursday 09 April 2026 02:51:26 +0000 (0:00:00.226) 0:00:02.674 ********
2026-04-09 02:51:32.350747 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.350754 | orchestrator |
2026-04-09 02:51:32.350760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350767 | orchestrator | Thursday 09 April 2026 02:51:26 +0000 (0:00:00.220) 0:00:02.895 ********
2026-04-09 02:51:32.350774 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.350780 | orchestrator |
2026-04-09 02:51:32.350788 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350796 | orchestrator | Thursday 09 April 2026 02:51:26 +0000 (0:00:00.231) 0:00:03.127 ********
2026-04-09 02:51:32.350803 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.350811 | orchestrator |
2026-04-09 02:51:32.350818 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350825 | orchestrator | Thursday 09 April 2026 02:51:26 +0000 (0:00:00.248) 0:00:03.375 ********
2026-04-09 02:51:32.350833 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411)
2026-04-09 02:51:32.350842 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411)
2026-04-09 02:51:32.350850 | orchestrator |
2026-04-09 02:51:32.350857 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350878 | orchestrator | Thursday 09 April 2026 02:51:27 +0000 (0:00:00.441) 0:00:03.816 ********
2026-04-09 02:51:32.350887 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761)
2026-04-09 02:51:32.350895 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761)
2026-04-09 02:51:32.350902 | orchestrator |
2026-04-09 02:51:32.350910 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350923 | orchestrator | Thursday 09 April 2026 02:51:27 +0000 (0:00:00.703) 0:00:04.520 ********
2026-04-09 02:51:32.350930 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad)
2026-04-09 02:51:32.350939 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad)
2026-04-09 02:51:32.350946 | orchestrator |
2026-04-09 02:51:32.350953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.350961 | orchestrator | Thursday 09 April 2026 02:51:28 +0000 (0:00:00.736) 0:00:05.256 ********
2026-04-09 02:51:32.350969 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be)
2026-04-09 02:51:32.350981 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be)
2026-04-09 02:51:32.350989 | orchestrator |
2026-04-09 02:51:32.350996 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:51:32.351005 | orchestrator | Thursday 09 April 2026 02:51:29 +0000 (0:00:01.170) 0:00:06.426 ********
2026-04-09 02:51:32.351012 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 02:51:32.351020 | orchestrator |
2026-04-09 02:51:32.351028 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:32.351036 | orchestrator | Thursday 09 April 2026 02:51:30 +0000 (0:00:00.388) 0:00:06.815 ********
2026-04-09 02:51:32.351043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-09 02:51:32.351050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-09 02:51:32.351058 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-09 02:51:32.351066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-09 02:51:32.351077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-09 02:51:32.351087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-09 02:51:32.351099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-09 02:51:32.351114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-09 02:51:32.351129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-09 02:51:32.351140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-09 02:51:32.351151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-09 02:51:32.351162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-09 02:51:32.351173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-09 02:51:32.351183 | orchestrator |
2026-04-09 02:51:32.351193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:32.351203 | orchestrator | Thursday 09 April 2026 02:51:30 +0000 (0:00:00.515) 0:00:07.330 ********
2026-04-09 02:51:32.351214 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.351224 | orchestrator |
2026-04-09 02:51:32.351235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:32.351246 | orchestrator | Thursday 09 April 2026 02:51:30 +0000 (0:00:00.230) 0:00:07.561 ********
2026-04-09 02:51:32.351256 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.351267 | orchestrator |
2026-04-09 02:51:32.351277 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:32.351319 | orchestrator | Thursday 09 April 2026 02:51:31 +0000 (0:00:00.248) 0:00:07.809 ********
2026-04-09 02:51:32.351330 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.351348 | orchestrator |
2026-04-09 02:51:32.351358 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:32.351369 | orchestrator | Thursday 09 April 2026 02:51:31 +0000 (0:00:00.218) 0:00:08.028 ********
2026-04-09 02:51:32.351380 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.351392 | orchestrator |
2026-04-09 02:51:32.351399 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:32.351405 | orchestrator | Thursday 09 April 2026 02:51:31 +0000 (0:00:00.217) 0:00:08.246 ********
2026-04-09 02:51:32.351412 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.351418 | orchestrator |
2026-04-09 02:51:32.351425 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:32.351431 | orchestrator | Thursday 09 April 2026 02:51:31 +0000 (0:00:00.208) 0:00:08.455 ********
2026-04-09 02:51:32.351438 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.351444 | orchestrator |
2026-04-09 02:51:32.351451 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:32.351457 | orchestrator | Thursday 09 April 2026 02:51:32 +0000 (0:00:00.225) 0:00:08.680 ********
2026-04-09 02:51:32.351463 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:32.351470 | orchestrator |
2026-04-09 02:51:32.351485 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:41.154006 | orchestrator | Thursday 09 April 2026 02:51:32 +0000 (0:00:00.250) 0:00:08.931 ********
2026-04-09 02:51:41.154117 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154126 | orchestrator |
2026-04-09 02:51:41.154131 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:41.154136 | orchestrator | Thursday 09 April 2026 02:51:33 +0000 (0:00:00.739) 0:00:09.671 ********
2026-04-09 02:51:41.154141 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-09 02:51:41.154146 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-09 02:51:41.154150 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-09 02:51:41.154154 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-09 02:51:41.154158 | orchestrator |
2026-04-09 02:51:41.154162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:41.154166 | orchestrator | Thursday 09 April 2026 02:51:33 +0000 (0:00:00.816) 0:00:10.487 ********
2026-04-09 02:51:41.154170 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154174 | orchestrator |
2026-04-09 02:51:41.154177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:41.154181 | orchestrator | Thursday 09 April 2026 02:51:34 +0000 (0:00:00.282) 0:00:10.770 ********
2026-04-09 02:51:41.154185 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154189 | orchestrator |
2026-04-09 02:51:41.154202 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:41.154206 | orchestrator | Thursday 09 April 2026 02:51:34 +0000 (0:00:00.222) 0:00:10.992 ********
2026-04-09 02:51:41.154210 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154214 | orchestrator |
2026-04-09 02:51:41.154217 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:51:41.154221 | orchestrator | Thursday 09 April 2026 02:51:34 +0000 (0:00:00.231) 0:00:11.223 ********
2026-04-09 02:51:41.154225 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154229 | orchestrator |
2026-04-09 02:51:41.154233 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-09 02:51:41.154237 | orchestrator | Thursday 09 April 2026 02:51:34 +0000 (0:00:00.213) 0:00:11.437 ********
2026-04-09 02:51:41.154240 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154244 | orchestrator |
2026-04-09 02:51:41.154248 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-09 02:51:41.154252 | orchestrator | Thursday 09 April 2026 02:51:34 +0000 (0:00:00.144) 0:00:11.581 ********
2026-04-09 02:51:41.154257 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}})
2026-04-09 02:51:41.154277 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1db77c01-2d77-5e1e-8d0a-4e535706b141'}})
2026-04-09 02:51:41.154317 | orchestrator |
2026-04-09 02:51:41.154323 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-09 02:51:41.154328 | orchestrator | Thursday 09 April 2026 02:51:35 +0000 (0:00:00.232) 0:00:11.814 ********
2026-04-09 02:51:41.154333 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154338 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154342 | orchestrator |
2026-04-09 02:51:41.154346 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-09 02:51:41.154350 | orchestrator | Thursday 09 April 2026 02:51:37 +0000 (0:00:02.031) 0:00:13.845 ********
2026-04-09 02:51:41.154354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154358 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154362 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154366 | orchestrator |
2026-04-09 02:51:41.154370 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-09 02:51:41.154374 | orchestrator | Thursday 09 April 2026 02:51:37 +0000 (0:00:00.163) 0:00:14.009 ********
2026-04-09 02:51:41.154377 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154381 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154385 | orchestrator |
2026-04-09 02:51:41.154389 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-09 02:51:41.154392 | orchestrator | Thursday 09 April 2026 02:51:38 +0000 (0:00:01.528) 0:00:15.537 ********
2026-04-09 02:51:41.154396 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154404 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154408 | orchestrator |
2026-04-09 02:51:41.154421 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-09 02:51:41.154431 | orchestrator | Thursday 09 April 2026 02:51:39 +0000 (0:00:00.223) 0:00:15.760 ********
2026-04-09 02:51:41.154446 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154450 | orchestrator |
2026-04-09 02:51:41.154454 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-09 02:51:41.154457 | orchestrator | Thursday 09 April 2026 02:51:39 +0000 (0:00:00.373) 0:00:16.134 ********
2026-04-09 02:51:41.154461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154465 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154469 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154473 | orchestrator |
2026-04-09 02:51:41.154476 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-09 02:51:41.154480 | orchestrator | Thursday 09 April 2026 02:51:39 +0000 (0:00:00.152) 0:00:16.286 ********
2026-04-09 02:51:41.154488 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154492 | orchestrator |
2026-04-09 02:51:41.154496 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-09 02:51:41.154500 | orchestrator | Thursday 09 April 2026 02:51:39 +0000 (0:00:00.158) 0:00:16.444 ********
2026-04-09 02:51:41.154507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154511 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154515 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154519 | orchestrator |
2026-04-09 02:51:41.154523 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-09 02:51:41.154527 | orchestrator | Thursday 09 April 2026 02:51:40 +0000 (0:00:00.172) 0:00:16.617 ********
2026-04-09 02:51:41.154530 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154534 | orchestrator |
2026-04-09 02:51:41.154538 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-09 02:51:41.154542 | orchestrator | Thursday 09 April 2026 02:51:40 +0000 (0:00:00.144) 0:00:16.761 ********
2026-04-09 02:51:41.154545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154549 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154553 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154557 | orchestrator |
2026-04-09 02:51:41.154561 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-09 02:51:41.154564 | orchestrator | Thursday 09 April 2026 02:51:40 +0000 (0:00:00.160) 0:00:16.934 ********
2026-04-09 02:51:41.154568 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:51:41.154573 | orchestrator |
2026-04-09 02:51:41.154577 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-09 02:51:41.154582 | orchestrator | Thursday 09 April 2026 02:51:40 +0000 (0:00:00.160) 0:00:17.094 ********
2026-04-09 02:51:41.154586 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154591 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154595 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154600 | orchestrator |
2026-04-09 02:51:41.154604 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-09 02:51:41.154608 | orchestrator | Thursday 09 April 2026 02:51:40 +0000 (0:00:00.157) 0:00:17.251 ********
2026-04-09 02:51:41.154613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154617 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154622 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154626 | orchestrator |
2026-04-09 02:51:41.154630 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-09 02:51:41.154635 | orchestrator | Thursday 09 April 2026 02:51:40 +0000 (0:00:00.161) 0:00:17.413 ********
2026-04-09 02:51:41.154639 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 02:51:41.154643 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 02:51:41.154651 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154655 | orchestrator |
2026-04-09 02:51:41.154659 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-09 02:51:41.154664 | orchestrator | Thursday 09 April 2026 02:51:40 +0000 (0:00:00.174) 0:00:17.587 ********
2026-04-09 02:51:41.154668 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:41.154672 | orchestrator |
2026-04-09 02:51:41.154677 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-09 02:51:41.154684 | orchestrator | Thursday 09 April 2026 02:51:41 +0000 (0:00:00.152) 0:00:17.740 ********
2026-04-09 02:51:48.134445 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:48.134586 | orchestrator |
2026-04-09 02:51:48.134600 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-09 02:51:48.134609 | orchestrator | Thursday 09 April 2026 02:51:41 +0000 (0:00:00.134) 0:00:17.874 ********
2026-04-09 02:51:48.134616 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:51:48.134624 | orchestrator |
2026-04-09 02:51:48.134628 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-09 02:51:48.134633 | orchestrator | Thursday 09 April 2026 02:51:41 +0000 (0:00:00.411) 0:00:18.285 ********
2026-04-09 02:51:48.134637 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 02:51:48.134642 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-09 02:51:48.134646 | orchestrator | }
2026-04-09 02:51:48.134651 | orchestrator |
2026-04-09 02:51:48.134655 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-09 02:51:48.134659 | orchestrator | Thursday 09 April 2026 02:51:41 +0000 (0:00:00.149) 0:00:18.435 ********
2026-04-09 02:51:48.134663 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 02:51:48.134667 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-09 02:51:48.134670 | orchestrator | }
2026-04-09 02:51:48.134674 | orchestrator |
2026-04-09 02:51:48.134679 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-09 02:51:48.134700 | orchestrator | Thursday 09 April 2026 02:51:42 +0000 (0:00:00.164) 0:00:18.600 ********
2026-04-09 02:51:48.134707 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 02:51:48.134713 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-09 02:51:48.134719 | orchestrator | }
2026-04-09 02:51:48.134725 | orchestrator |
2026-04-09 02:51:48.134731 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-09 02:51:48.134737 | orchestrator | Thursday 09 April 2026 02:51:42 +0000 (0:00:00.148) 0:00:18.748 ********
2026-04-09 02:51:48.134742 | orchestrator | ok:
[testbed-node-3] 2026-04-09 02:51:48.134748 | orchestrator | 2026-04-09 02:51:48.134755 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-09 02:51:48.134761 | orchestrator | Thursday 09 April 2026 02:51:42 +0000 (0:00:00.731) 0:00:19.480 ******** 2026-04-09 02:51:48.134767 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:51:48.134799 | orchestrator | 2026-04-09 02:51:48.134806 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-09 02:51:48.134812 | orchestrator | Thursday 09 April 2026 02:51:43 +0000 (0:00:00.568) 0:00:20.049 ******** 2026-04-09 02:51:48.134818 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:51:48.134825 | orchestrator | 2026-04-09 02:51:48.134831 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-09 02:51:48.134837 | orchestrator | Thursday 09 April 2026 02:51:43 +0000 (0:00:00.522) 0:00:20.572 ******** 2026-04-09 02:51:48.134844 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:51:48.134851 | orchestrator | 2026-04-09 02:51:48.134857 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-09 02:51:48.134863 | orchestrator | Thursday 09 April 2026 02:51:44 +0000 (0:00:00.159) 0:00:20.732 ******** 2026-04-09 02:51:48.134870 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.134874 | orchestrator | 2026-04-09 02:51:48.134878 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-09 02:51:48.134901 | orchestrator | Thursday 09 April 2026 02:51:44 +0000 (0:00:00.123) 0:00:20.855 ******** 2026-04-09 02:51:48.134908 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.134914 | orchestrator | 2026-04-09 02:51:48.134921 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-09 02:51:48.134927 | orchestrator | 
Thursday 09 April 2026 02:51:44 +0000 (0:00:00.123) 0:00:20.979 ******** 2026-04-09 02:51:48.134933 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 02:51:48.134940 | orchestrator |  "vgs_report": { 2026-04-09 02:51:48.134947 | orchestrator |  "vg": [] 2026-04-09 02:51:48.134954 | orchestrator |  } 2026-04-09 02:51:48.134961 | orchestrator | } 2026-04-09 02:51:48.134968 | orchestrator | 2026-04-09 02:51:48.134972 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-09 02:51:48.134977 | orchestrator | Thursday 09 April 2026 02:51:44 +0000 (0:00:00.192) 0:00:21.171 ******** 2026-04-09 02:51:48.134981 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.134986 | orchestrator | 2026-04-09 02:51:48.134990 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-09 02:51:48.134994 | orchestrator | Thursday 09 April 2026 02:51:44 +0000 (0:00:00.140) 0:00:21.312 ******** 2026-04-09 02:51:48.134999 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135003 | orchestrator | 2026-04-09 02:51:48.135007 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-09 02:51:48.135012 | orchestrator | Thursday 09 April 2026 02:51:45 +0000 (0:00:00.394) 0:00:21.706 ******** 2026-04-09 02:51:48.135016 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135020 | orchestrator | 2026-04-09 02:51:48.135024 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-09 02:51:48.135029 | orchestrator | Thursday 09 April 2026 02:51:45 +0000 (0:00:00.150) 0:00:21.856 ******** 2026-04-09 02:51:48.135033 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135037 | orchestrator | 2026-04-09 02:51:48.135041 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-09 02:51:48.135046 | orchestrator | 
Thursday 09 April 2026 02:51:45 +0000 (0:00:00.148) 0:00:22.005 ******** 2026-04-09 02:51:48.135050 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135054 | orchestrator | 2026-04-09 02:51:48.135059 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-09 02:51:48.135063 | orchestrator | Thursday 09 April 2026 02:51:45 +0000 (0:00:00.140) 0:00:22.146 ******** 2026-04-09 02:51:48.135067 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135070 | orchestrator | 2026-04-09 02:51:48.135074 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-09 02:51:48.135078 | orchestrator | Thursday 09 April 2026 02:51:45 +0000 (0:00:00.144) 0:00:22.290 ******** 2026-04-09 02:51:48.135082 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135085 | orchestrator | 2026-04-09 02:51:48.135089 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-09 02:51:48.135093 | orchestrator | Thursday 09 April 2026 02:51:45 +0000 (0:00:00.151) 0:00:22.442 ******** 2026-04-09 02:51:48.135109 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135113 | orchestrator | 2026-04-09 02:51:48.135117 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-09 02:51:48.135121 | orchestrator | Thursday 09 April 2026 02:51:46 +0000 (0:00:00.158) 0:00:22.601 ******** 2026-04-09 02:51:48.135125 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135128 | orchestrator | 2026-04-09 02:51:48.135132 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-09 02:51:48.135136 | orchestrator | Thursday 09 April 2026 02:51:46 +0000 (0:00:00.144) 0:00:22.745 ******** 2026-04-09 02:51:48.135140 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135143 | orchestrator | 2026-04-09 02:51:48.135147 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-09 02:51:48.135151 | orchestrator | Thursday 09 April 2026 02:51:46 +0000 (0:00:00.146) 0:00:22.892 ******** 2026-04-09 02:51:48.135159 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135163 | orchestrator | 2026-04-09 02:51:48.135166 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-09 02:51:48.135170 | orchestrator | Thursday 09 April 2026 02:51:46 +0000 (0:00:00.141) 0:00:23.034 ******** 2026-04-09 02:51:48.135174 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135178 | orchestrator | 2026-04-09 02:51:48.135186 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-09 02:51:48.135189 | orchestrator | Thursday 09 April 2026 02:51:46 +0000 (0:00:00.148) 0:00:23.183 ******** 2026-04-09 02:51:48.135193 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135197 | orchestrator | 2026-04-09 02:51:48.135201 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-09 02:51:48.135204 | orchestrator | Thursday 09 April 2026 02:51:46 +0000 (0:00:00.156) 0:00:23.339 ******** 2026-04-09 02:51:48.135208 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135212 | orchestrator | 2026-04-09 02:51:48.135215 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-09 02:51:48.135221 | orchestrator | Thursday 09 April 2026 02:51:47 +0000 (0:00:00.369) 0:00:23.708 ******** 2026-04-09 02:51:48.135228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:48.135238 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 
'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:48.135245 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135252 | orchestrator | 2026-04-09 02:51:48.135258 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-09 02:51:48.135264 | orchestrator | Thursday 09 April 2026 02:51:47 +0000 (0:00:00.157) 0:00:23.865 ******** 2026-04-09 02:51:48.135269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:48.135276 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:48.135307 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135313 | orchestrator | 2026-04-09 02:51:48.135318 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-09 02:51:48.135323 | orchestrator | Thursday 09 April 2026 02:51:47 +0000 (0:00:00.153) 0:00:24.018 ******** 2026-04-09 02:51:48.135329 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:48.135334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:48.135341 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135346 | orchestrator | 2026-04-09 02:51:48.135352 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-09 02:51:48.135358 | orchestrator | Thursday 09 April 2026 02:51:47 +0000 (0:00:00.164) 0:00:24.183 ******** 2026-04-09 02:51:48.135364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:48.135370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:48.135376 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135382 | orchestrator | 2026-04-09 02:51:48.135388 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-09 02:51:48.135394 | orchestrator | Thursday 09 April 2026 02:51:47 +0000 (0:00:00.169) 0:00:24.352 ******** 2026-04-09 02:51:48.135406 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:48.135412 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:48.135418 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:48.135425 | orchestrator | 2026-04-09 02:51:48.135431 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-09 02:51:48.135437 | orchestrator | Thursday 09 April 2026 02:51:47 +0000 (0:00:00.199) 0:00:24.552 ******** 2026-04-09 02:51:48.135461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:53.873455 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:53.873576 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:53.873603 | orchestrator | 2026-04-09 02:51:53.873624 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-04-09 02:51:53.873646 | orchestrator | Thursday 09 April 2026 02:51:48 +0000 (0:00:00.168) 0:00:24.721 ******** 2026-04-09 02:51:53.873665 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:53.873685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:53.873704 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:53.873725 | orchestrator | 2026-04-09 02:51:53.873766 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-09 02:51:53.873821 | orchestrator | Thursday 09 April 2026 02:51:48 +0000 (0:00:00.168) 0:00:24.889 ******** 2026-04-09 02:51:53.873835 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:53.873849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:53.873863 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:53.873875 | orchestrator | 2026-04-09 02:51:53.873888 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-09 02:51:53.873902 | orchestrator | Thursday 09 April 2026 02:51:48 +0000 (0:00:00.156) 0:00:25.045 ******** 2026-04-09 02:51:53.873915 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:51:53.873927 | orchestrator | 2026-04-09 02:51:53.873938 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-09 02:51:53.873949 | orchestrator | Thursday 09 April 2026 02:51:48 +0000 
(0:00:00.541) 0:00:25.587 ******** 2026-04-09 02:51:53.873960 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:51:53.873971 | orchestrator | 2026-04-09 02:51:53.873982 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-09 02:51:53.873993 | orchestrator | Thursday 09 April 2026 02:51:49 +0000 (0:00:00.521) 0:00:26.108 ******** 2026-04-09 02:51:53.874004 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:51:53.874083 | orchestrator | 2026-04-09 02:51:53.874105 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-09 02:51:53.874118 | orchestrator | Thursday 09 April 2026 02:51:49 +0000 (0:00:00.149) 0:00:26.258 ******** 2026-04-09 02:51:53.874130 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'vg_name': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'}) 2026-04-09 02:51:53.874142 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'vg_name': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}) 2026-04-09 02:51:53.874176 | orchestrator | 2026-04-09 02:51:53.874195 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-09 02:51:53.874207 | orchestrator | Thursday 09 April 2026 02:51:49 +0000 (0:00:00.193) 0:00:26.451 ******** 2026-04-09 02:51:53.874218 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:53.874229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:53.874240 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:53.874251 | orchestrator | 2026-04-09 02:51:53.874262 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-04-09 02:51:53.874273 | orchestrator | Thursday 09 April 2026 02:51:50 +0000 (0:00:00.424) 0:00:26.876 ******** 2026-04-09 02:51:53.874307 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:53.874318 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:53.874329 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:53.874339 | orchestrator | 2026-04-09 02:51:53.874350 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-09 02:51:53.874361 | orchestrator | Thursday 09 April 2026 02:51:50 +0000 (0:00:00.167) 0:00:27.043 ******** 2026-04-09 02:51:53.874372 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 02:51:53.874383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 02:51:53.874393 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:51:53.874404 | orchestrator | 2026-04-09 02:51:53.874415 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-09 02:51:53.874426 | orchestrator | Thursday 09 April 2026 02:51:50 +0000 (0:00:00.176) 0:00:27.220 ******** 2026-04-09 02:51:53.874457 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 02:51:53.874469 | orchestrator |  "lvm_report": { 2026-04-09 02:51:53.874480 | orchestrator |  "lv": [ 2026-04-09 02:51:53.874491 | orchestrator |  { 2026-04-09 02:51:53.874501 | orchestrator |  "lv_name": 
"osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141", 2026-04-09 02:51:53.874513 | orchestrator |  "vg_name": "ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141" 2026-04-09 02:51:53.874524 | orchestrator |  }, 2026-04-09 02:51:53.874535 | orchestrator |  { 2026-04-09 02:51:53.874545 | orchestrator |  "lv_name": "osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5", 2026-04-09 02:51:53.874556 | orchestrator |  "vg_name": "ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5" 2026-04-09 02:51:53.874567 | orchestrator |  } 2026-04-09 02:51:53.874578 | orchestrator |  ], 2026-04-09 02:51:53.874588 | orchestrator |  "pv": [ 2026-04-09 02:51:53.874599 | orchestrator |  { 2026-04-09 02:51:53.874610 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-09 02:51:53.874621 | orchestrator |  "vg_name": "ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5" 2026-04-09 02:51:53.874631 | orchestrator |  }, 2026-04-09 02:51:53.874642 | orchestrator |  { 2026-04-09 02:51:53.874659 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-09 02:51:53.874670 | orchestrator |  "vg_name": "ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141" 2026-04-09 02:51:53.874681 | orchestrator |  } 2026-04-09 02:51:53.874692 | orchestrator |  ] 2026-04-09 02:51:53.874703 | orchestrator |  } 2026-04-09 02:51:53.874713 | orchestrator | } 2026-04-09 02:51:53.874732 | orchestrator | 2026-04-09 02:51:53.874744 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-09 02:51:53.874754 | orchestrator | 2026-04-09 02:51:53.874765 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 02:51:53.874777 | orchestrator | Thursday 09 April 2026 02:51:50 +0000 (0:00:00.311) 0:00:27.532 ******** 2026-04-09 02:51:53.874788 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-09 02:51:53.874798 | orchestrator | 2026-04-09 02:51:53.874809 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 
02:51:53.874820 | orchestrator | Thursday 09 April 2026 02:51:51 +0000 (0:00:00.314) 0:00:27.846 ******** 2026-04-09 02:51:53.874831 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:51:53.874841 | orchestrator | 2026-04-09 02:51:53.874852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:51:53.874863 | orchestrator | Thursday 09 April 2026 02:51:51 +0000 (0:00:00.249) 0:00:28.095 ******** 2026-04-09 02:51:53.874873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-09 02:51:53.874884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-09 02:51:53.874895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-09 02:51:53.874905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-09 02:51:53.874916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-09 02:51:53.874926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-09 02:51:53.874937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-09 02:51:53.874948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-09 02:51:53.874958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-09 02:51:53.874969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-09 02:51:53.874980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-09 02:51:53.874990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-09 02:51:53.875001 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-09 02:51:53.875011 | orchestrator | 2026-04-09 02:51:53.875035 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:51:53.875046 | orchestrator | Thursday 09 April 2026 02:51:51 +0000 (0:00:00.467) 0:00:28.562 ******** 2026-04-09 02:51:53.875114 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:51:53.875167 | orchestrator | 2026-04-09 02:51:53.875178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:51:53.875188 | orchestrator | Thursday 09 April 2026 02:51:52 +0000 (0:00:00.229) 0:00:28.792 ******** 2026-04-09 02:51:53.875199 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:51:53.875210 | orchestrator | 2026-04-09 02:51:53.875221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:51:53.875232 | orchestrator | Thursday 09 April 2026 02:51:52 +0000 (0:00:00.736) 0:00:29.529 ******** 2026-04-09 02:51:53.875243 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:51:53.875254 | orchestrator | 2026-04-09 02:51:53.875265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:51:53.875311 | orchestrator | Thursday 09 April 2026 02:51:53 +0000 (0:00:00.242) 0:00:29.771 ******** 2026-04-09 02:51:53.875323 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:51:53.875334 | orchestrator | 2026-04-09 02:51:53.875345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:51:53.875355 | orchestrator | Thursday 09 April 2026 02:51:53 +0000 (0:00:00.232) 0:00:30.004 ******** 2026-04-09 02:51:53.875375 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:51:53.875386 | orchestrator | 2026-04-09 02:51:53.875397 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-04-09 02:51:53.875418 | orchestrator | Thursday 09 April 2026 02:51:53 +0000 (0:00:00.231) 0:00:30.235 ******** 2026-04-09 02:51:53.875430 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:51:53.875440 | orchestrator | 2026-04-09 02:51:53.875460 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:05.891403 | orchestrator | Thursday 09 April 2026 02:51:53 +0000 (0:00:00.220) 0:00:30.455 ******** 2026-04-09 02:52:05.891546 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:52:05.891573 | orchestrator | 2026-04-09 02:52:05.891592 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:05.891613 | orchestrator | Thursday 09 April 2026 02:51:54 +0000 (0:00:00.234) 0:00:30.690 ******** 2026-04-09 02:52:05.891633 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:52:05.891653 | orchestrator | 2026-04-09 02:52:05.891673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:05.891693 | orchestrator | Thursday 09 April 2026 02:51:54 +0000 (0:00:00.222) 0:00:30.912 ******** 2026-04-09 02:52:05.891712 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be) 2026-04-09 02:52:05.891733 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be) 2026-04-09 02:52:05.891754 | orchestrator | 2026-04-09 02:52:05.891796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:05.891817 | orchestrator | Thursday 09 April 2026 02:51:54 +0000 (0:00:00.459) 0:00:31.372 ******** 2026-04-09 02:52:05.891840 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74) 2026-04-09 02:52:05.891862 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74)
2026-04-09 02:52:05.891882 | orchestrator |
2026-04-09 02:52:05.891903 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:52:05.891924 | orchestrator | Thursday 09 April 2026 02:51:55 +0000 (0:00:00.482) 0:00:31.854 ********
2026-04-09 02:52:05.891947 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf)
2026-04-09 02:52:05.891973 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf)
2026-04-09 02:52:05.891997 | orchestrator |
2026-04-09 02:52:05.892019 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:52:05.892040 | orchestrator | Thursday 09 April 2026 02:51:56 +0000 (0:00:00.829) 0:00:32.683 ********
2026-04-09 02:52:05.892061 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105)
2026-04-09 02:52:05.892081 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105)
2026-04-09 02:52:05.892105 | orchestrator |
2026-04-09 02:52:05.892129 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 02:52:05.892151 | orchestrator | Thursday 09 April 2026 02:51:57 +0000 (0:00:01.051) 0:00:33.735 ********
2026-04-09 02:52:05.892173 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 02:52:05.892196 | orchestrator |
2026-04-09 02:52:05.892218 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.892240 | orchestrator | Thursday 09 April 2026 02:51:57 +0000 (0:00:00.397) 0:00:34.133 ********
2026-04-09 02:52:05.892262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-09 02:52:05.892332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-09 02:52:05.892354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-09 02:52:05.892425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-09 02:52:05.892446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-09 02:52:05.892466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-09 02:52:05.892485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-09 02:52:05.892506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-09 02:52:05.892525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-09 02:52:05.892544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-09 02:52:05.892563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-09 02:52:05.892583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-09 02:52:05.892603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-09 02:52:05.892623 | orchestrator |
2026-04-09 02:52:05.892643 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.892663 | orchestrator | Thursday 09 April 2026 02:51:58 +0000 (0:00:00.495) 0:00:34.628 ********
2026-04-09 02:52:05.892682 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.892703 | orchestrator |
2026-04-09 02:52:05.892723 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.892742 | orchestrator | Thursday 09 April 2026 02:51:58 +0000 (0:00:00.226) 0:00:34.855 ********
2026-04-09 02:52:05.892760 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.892778 | orchestrator |
2026-04-09 02:52:05.892796 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.892813 | orchestrator | Thursday 09 April 2026 02:51:58 +0000 (0:00:00.237) 0:00:35.093 ********
2026-04-09 02:52:05.892831 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.892848 | orchestrator |
2026-04-09 02:52:05.892895 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.892915 | orchestrator | Thursday 09 April 2026 02:51:58 +0000 (0:00:00.233) 0:00:35.326 ********
2026-04-09 02:52:05.892932 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.892950 | orchestrator |
2026-04-09 02:52:05.892968 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.892986 | orchestrator | Thursday 09 April 2026 02:51:58 +0000 (0:00:00.231) 0:00:35.558 ********
2026-04-09 02:52:05.893005 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893022 | orchestrator |
2026-04-09 02:52:05.893040 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.893059 | orchestrator | Thursday 09 April 2026 02:51:59 +0000 (0:00:00.228) 0:00:35.786 ********
2026-04-09 02:52:05.893080 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893099 | orchestrator |
2026-04-09 02:52:05.893119 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.893138 | orchestrator | Thursday 09 April 2026 02:51:59 +0000 (0:00:00.212) 0:00:35.999 ********
2026-04-09 02:52:05.893172 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893191 | orchestrator |
2026-04-09 02:52:05.893208 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.893224 | orchestrator | Thursday 09 April 2026 02:51:59 +0000 (0:00:00.217) 0:00:36.217 ********
2026-04-09 02:52:05.893240 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893256 | orchestrator |
2026-04-09 02:52:05.893304 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.893326 | orchestrator | Thursday 09 April 2026 02:52:00 +0000 (0:00:00.725) 0:00:36.942 ********
2026-04-09 02:52:05.893343 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-09 02:52:05.893382 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-09 02:52:05.893401 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-09 02:52:05.893419 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-09 02:52:05.893439 | orchestrator |
2026-04-09 02:52:05.893458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.893476 | orchestrator | Thursday 09 April 2026 02:52:01 +0000 (0:00:00.730) 0:00:37.673 ********
2026-04-09 02:52:05.893490 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893501 | orchestrator |
2026-04-09 02:52:05.893520 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.893536 | orchestrator | Thursday 09 April 2026 02:52:01 +0000 (0:00:00.240) 0:00:37.913 ********
2026-04-09 02:52:05.893566 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893583 | orchestrator |
2026-04-09 02:52:05.893600 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.893618 | orchestrator | Thursday 09 April 2026 02:52:01 +0000 (0:00:00.243) 0:00:38.157 ********
2026-04-09 02:52:05.893635 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893652 | orchestrator |
2026-04-09 02:52:05.893667 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 02:52:05.893681 | orchestrator | Thursday 09 April 2026 02:52:01 +0000 (0:00:00.219) 0:00:38.377 ********
2026-04-09 02:52:05.893696 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893710 | orchestrator |
2026-04-09 02:52:05.893726 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-09 02:52:05.893742 | orchestrator | Thursday 09 April 2026 02:52:02 +0000 (0:00:00.219) 0:00:38.597 ********
2026-04-09 02:52:05.893758 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893773 | orchestrator |
2026-04-09 02:52:05.893789 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-09 02:52:05.893804 | orchestrator | Thursday 09 April 2026 02:52:02 +0000 (0:00:00.153) 0:00:38.750 ********
2026-04-09 02:52:05.893821 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '68e90870-4763-57e7-8e76-63c40a6d6d6f'}})
2026-04-09 02:52:05.893838 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9961abb4-5e3b-57c6-b852-cf206941d3b6'}})
2026-04-09 02:52:05.893853 | orchestrator |
2026-04-09 02:52:05.893869 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-09 02:52:05.893885 | orchestrator | Thursday 09 April 2026 02:52:02 +0000 (0:00:00.213) 0:00:38.963 ********
2026-04-09 02:52:05.893903 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:05.893922 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:05.893935 | orchestrator |
2026-04-09 02:52:05.893944 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-09 02:52:05.893954 | orchestrator | Thursday 09 April 2026 02:52:04 +0000 (0:00:01.916) 0:00:40.880 ********
2026-04-09 02:52:05.893963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:05.893974 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:05.893985 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:05.893994 | orchestrator |
2026-04-09 02:52:05.894004 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-09 02:52:05.894013 | orchestrator | Thursday 09 April 2026 02:52:04 +0000 (0:00:00.161) 0:00:41.041 ********
2026-04-09 02:52:05.894103 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:05.894152 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:12.312785 | orchestrator |
2026-04-09 02:52:12.312873 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-09 02:52:12.312883 | orchestrator | Thursday 09 April 2026 02:52:05 +0000 (0:00:01.431) 0:00:42.472 ********
2026-04-09 02:52:12.312890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:12.312898 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:12.312904 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.312911 | orchestrator |
2026-04-09 02:52:12.312929 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-09 02:52:12.312935 | orchestrator | Thursday 09 April 2026 02:52:06 +0000 (0:00:00.162) 0:00:42.946 ********
2026-04-09 02:52:12.312941 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.312956 | orchestrator |
2026-04-09 02:52:12.312962 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-09 02:52:12.312968 | orchestrator | Thursday 09 April 2026 02:52:06 +0000 (0:00:00.162) 0:00:43.108 ********
2026-04-09 02:52:12.312974 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:12.312980 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:12.312986 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.312992 | orchestrator |
2026-04-09 02:52:12.312997 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-09 02:52:12.313003 | orchestrator | Thursday 09 April 2026 02:52:06 +0000 (0:00:00.172) 0:00:43.281 ********
2026-04-09 02:52:12.313009 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313015 | orchestrator |
2026-04-09 02:52:12.313021 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-09 02:52:12.313027 | orchestrator | Thursday 09 April 2026 02:52:06 +0000 (0:00:00.131) 0:00:43.412 ********
2026-04-09 02:52:12.313032 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:12.313038 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:12.313044 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313051 | orchestrator |
2026-04-09 02:52:12.313056 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-09 02:52:12.313062 | orchestrator | Thursday 09 April 2026 02:52:07 +0000 (0:00:00.181) 0:00:43.594 ********
2026-04-09 02:52:12.313068 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313074 | orchestrator |
2026-04-09 02:52:12.313079 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-09 02:52:12.313085 | orchestrator | Thursday 09 April 2026 02:52:07 +0000 (0:00:00.166) 0:00:43.760 ********
2026-04-09 02:52:12.313091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:12.313097 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:12.313103 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313108 | orchestrator |
2026-04-09 02:52:12.313114 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-09 02:52:12.313139 | orchestrator | Thursday 09 April 2026 02:52:07 +0000 (0:00:00.164) 0:00:43.924 ********
2026-04-09 02:52:12.313145 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:52:12.313152 | orchestrator |
2026-04-09 02:52:12.313158 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-09 02:52:12.313164 | orchestrator | Thursday 09 April 2026 02:52:07 +0000 (0:00:00.159) 0:00:44.084 ********
2026-04-09 02:52:12.313170 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:12.313175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:12.313181 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313187 | orchestrator |
2026-04-09 02:52:12.313193 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-09 02:52:12.313199 | orchestrator | Thursday 09 April 2026 02:52:07 +0000 (0:00:00.168) 0:00:44.252 ********
2026-04-09 02:52:12.313204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:12.313210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:12.313216 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313222 | orchestrator |
2026-04-09 02:52:12.313227 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-09 02:52:12.313246 | orchestrator | Thursday 09 April 2026 02:52:07 +0000 (0:00:00.167) 0:00:44.420 ********
2026-04-09 02:52:12.313252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:12.313258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:12.313263 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313269 | orchestrator |
2026-04-09 02:52:12.313340 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-09 02:52:12.313351 | orchestrator | Thursday 09 April 2026 02:52:07 +0000 (0:00:00.164) 0:00:44.584 ********
2026-04-09 02:52:12.313366 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313376 | orchestrator |
2026-04-09 02:52:12.313386 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-09 02:52:12.313409 | orchestrator | Thursday 09 April 2026 02:52:08 +0000 (0:00:00.392) 0:00:44.977 ********
2026-04-09 02:52:12.313419 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313429 | orchestrator |
2026-04-09 02:52:12.313448 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-09 02:52:12.313459 | orchestrator | Thursday 09 April 2026 02:52:08 +0000 (0:00:00.154) 0:00:45.132 ********
2026-04-09 02:52:12.313466 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313473 | orchestrator |
2026-04-09 02:52:12.313480 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-09 02:52:12.313487 | orchestrator | Thursday 09 April 2026 02:52:08 +0000 (0:00:00.152) 0:00:45.284 ********
2026-04-09 02:52:12.313494 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 02:52:12.313501 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-09 02:52:12.313508 | orchestrator | }
2026-04-09 02:52:12.313515 | orchestrator |
2026-04-09 02:52:12.313521 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-09 02:52:12.313528 | orchestrator | Thursday 09 April 2026 02:52:08 +0000 (0:00:00.161) 0:00:45.445 ********
2026-04-09 02:52:12.313535 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 02:52:12.313542 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-09 02:52:12.313556 | orchestrator | }
2026-04-09 02:52:12.313563 | orchestrator |
2026-04-09 02:52:12.313569 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-09 02:52:12.313576 | orchestrator | Thursday 09 April 2026 02:52:09 +0000 (0:00:00.157) 0:00:45.603 ********
2026-04-09 02:52:12.313582 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 02:52:12.313589 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-09 02:52:12.313596 | orchestrator | }
2026-04-09 02:52:12.313603 | orchestrator |
2026-04-09 02:52:12.313610 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-09 02:52:12.313617 | orchestrator | Thursday 09 April 2026 02:52:09 +0000 (0:00:00.187) 0:00:45.790 ********
2026-04-09 02:52:12.313623 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:52:12.313630 | orchestrator |
2026-04-09 02:52:12.313637 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-09 02:52:12.313643 | orchestrator | Thursday 09 April 2026 02:52:09 +0000 (0:00:00.554) 0:00:46.345 ********
2026-04-09 02:52:12.313650 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:52:12.313657 | orchestrator |
2026-04-09 02:52:12.313663 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-09 02:52:12.313670 | orchestrator | Thursday 09 April 2026 02:52:10 +0000 (0:00:00.533) 0:00:46.879 ********
2026-04-09 02:52:12.313677 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:52:12.313684 | orchestrator |
2026-04-09 02:52:12.313691 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-09 02:52:12.313698 | orchestrator | Thursday 09 April 2026 02:52:10 +0000 (0:00:00.553) 0:00:47.432 ********
2026-04-09 02:52:12.313704 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:52:12.313710 | orchestrator |
2026-04-09 02:52:12.313716 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-09 02:52:12.313721 | orchestrator | Thursday 09 April 2026 02:52:11 +0000 (0:00:00.180) 0:00:47.613 ********
2026-04-09 02:52:12.313727 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313733 | orchestrator |
2026-04-09 02:52:12.313738 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-09 02:52:12.313744 | orchestrator | Thursday 09 April 2026 02:52:11 +0000 (0:00:00.131) 0:00:47.744 ********
2026-04-09 02:52:12.313750 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313756 | orchestrator |
2026-04-09 02:52:12.313762 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-09 02:52:12.313767 | orchestrator | Thursday 09 April 2026 02:52:11 +0000 (0:00:00.367) 0:00:48.112 ********
2026-04-09 02:52:12.313773 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 02:52:12.313779 | orchestrator |     "vgs_report": {
2026-04-09 02:52:12.313785 | orchestrator |         "vg": []
2026-04-09 02:52:12.313791 | orchestrator |     }
2026-04-09 02:52:12.313796 | orchestrator | }
2026-04-09 02:52:12.313802 | orchestrator |
2026-04-09 02:52:12.313808 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-09 02:52:12.313814 | orchestrator | Thursday 09 April 2026 02:52:11 +0000 (0:00:00.170) 0:00:48.283 ********
2026-04-09 02:52:12.313819 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313825 | orchestrator |
2026-04-09 02:52:12.313831 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-09 02:52:12.313836 | orchestrator | Thursday 09 April 2026 02:52:11 +0000 (0:00:00.143) 0:00:48.426 ********
2026-04-09 02:52:12.313842 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313848 | orchestrator |
2026-04-09 02:52:12.313853 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-09 02:52:12.313859 | orchestrator | Thursday 09 April 2026 02:52:11 +0000 (0:00:00.145) 0:00:48.571 ********
2026-04-09 02:52:12.313865 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313870 | orchestrator |
2026-04-09 02:52:12.313876 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-09 02:52:12.313882 | orchestrator | Thursday 09 April 2026 02:52:12 +0000 (0:00:00.160) 0:00:48.732 ********
2026-04-09 02:52:12.313902 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:12.313908 | orchestrator |
2026-04-09 02:52:12.313920 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-09 02:52:17.586587 | orchestrator | Thursday 09 April 2026 02:52:12 +0000 (0:00:00.166) 0:00:48.898 ********
2026-04-09 02:52:17.586702 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.586724 | orchestrator |
2026-04-09 02:52:17.586745 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-09 02:52:17.586765 | orchestrator | Thursday 09 April 2026 02:52:12 +0000 (0:00:00.165) 0:00:49.063 ********
2026-04-09 02:52:17.586783 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.586804 | orchestrator |
2026-04-09 02:52:17.586822 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-09 02:52:17.586842 | orchestrator | Thursday 09 April 2026 02:52:12 +0000 (0:00:00.166) 0:00:49.230 ********
2026-04-09 02:52:17.586862 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.586880 | orchestrator |
2026-04-09 02:52:17.586916 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-09 02:52:17.586929 | orchestrator | Thursday 09 April 2026 02:52:12 +0000 (0:00:00.147) 0:00:49.378 ********
2026-04-09 02:52:17.586940 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.586951 | orchestrator |
2026-04-09 02:52:17.586962 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-09 02:52:17.586974 | orchestrator | Thursday 09 April 2026 02:52:12 +0000 (0:00:00.144) 0:00:49.522 ********
2026-04-09 02:52:17.586993 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587011 | orchestrator |
2026-04-09 02:52:17.587030 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-09 02:52:17.587049 | orchestrator | Thursday 09 April 2026 02:52:13 +0000 (0:00:00.146) 0:00:49.669 ********
2026-04-09 02:52:17.587068 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587088 | orchestrator |
2026-04-09 02:52:17.587107 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-09 02:52:17.587128 | orchestrator | Thursday 09 April 2026 02:52:13 +0000 (0:00:00.380) 0:00:50.050 ********
2026-04-09 02:52:17.587173 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587187 | orchestrator |
2026-04-09 02:52:17.587199 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-09 02:52:17.587212 | orchestrator | Thursday 09 April 2026 02:52:13 +0000 (0:00:00.145) 0:00:50.195 ********
2026-04-09 02:52:17.587225 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587238 | orchestrator |
2026-04-09 02:52:17.587251 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-09 02:52:17.587262 | orchestrator | Thursday 09 April 2026 02:52:13 +0000 (0:00:00.147) 0:00:50.343 ********
2026-04-09 02:52:17.587303 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587320 | orchestrator |
2026-04-09 02:52:17.587331 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-09 02:52:17.587342 | orchestrator | Thursday 09 April 2026 02:52:13 +0000 (0:00:00.157) 0:00:50.500 ********
2026-04-09 02:52:17.587353 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587367 | orchestrator |
2026-04-09 02:52:17.587386 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-09 02:52:17.587404 | orchestrator | Thursday 09 April 2026 02:52:14 +0000 (0:00:00.161) 0:00:50.662 ********
2026-04-09 02:52:17.587437 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.587458 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.587492 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587526 | orchestrator |
2026-04-09 02:52:17.587545 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-09 02:52:17.587594 | orchestrator | Thursday 09 April 2026 02:52:14 +0000 (0:00:00.165) 0:00:50.827 ********
2026-04-09 02:52:17.587614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.587632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.587650 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587669 | orchestrator |
2026-04-09 02:52:17.587687 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-09 02:52:17.587704 | orchestrator | Thursday 09 April 2026 02:52:14 +0000 (0:00:00.173) 0:00:51.001 ********
2026-04-09 02:52:17.587719 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.587737 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.587756 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587775 | orchestrator |
2026-04-09 02:52:17.587794 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-09 02:52:17.587813 | orchestrator | Thursday 09 April 2026 02:52:14 +0000 (0:00:00.175) 0:00:51.176 ********
2026-04-09 02:52:17.587832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.587846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.587856 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587867 | orchestrator |
2026-04-09 02:52:17.587901 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-09 02:52:17.587912 | orchestrator | Thursday 09 April 2026 02:52:14 +0000 (0:00:00.166) 0:00:51.342 ********
2026-04-09 02:52:17.587923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.587933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.587944 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.587955 | orchestrator |
2026-04-09 02:52:17.587976 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-09 02:52:17.587987 | orchestrator | Thursday 09 April 2026 02:52:14 +0000 (0:00:00.182) 0:00:51.525 ********
2026-04-09 02:52:17.587997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.588008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.588019 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.588030 | orchestrator |
2026-04-09 02:52:17.588040 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-09 02:52:17.588051 | orchestrator | Thursday 09 April 2026 02:52:15 +0000 (0:00:00.192) 0:00:51.717 ********
2026-04-09 02:52:17.588062 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.588072 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.588083 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.588105 | orchestrator |
2026-04-09 02:52:17.588116 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-09 02:52:17.588127 | orchestrator | Thursday 09 April 2026 02:52:15 +0000 (0:00:00.431) 0:00:52.149 ********
2026-04-09 02:52:17.588138 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.588149 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.588159 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.588170 | orchestrator |
2026-04-09 02:52:17.588181 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-09 02:52:17.588192 | orchestrator | Thursday 09 April 2026 02:52:15 +0000 (0:00:00.166) 0:00:52.315 ********
2026-04-09 02:52:17.588203 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:52:17.588214 | orchestrator |
2026-04-09 02:52:17.588224 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-09 02:52:17.588235 | orchestrator | Thursday 09 April 2026 02:52:16 +0000 (0:00:00.574) 0:00:52.890 ********
2026-04-09 02:52:17.588246 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:52:17.588257 | orchestrator |
2026-04-09 02:52:17.588267 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-09 02:52:17.588311 | orchestrator | Thursday 09 April 2026 02:52:16 +0000 (0:00:00.539) 0:00:53.430 ********
2026-04-09 02:52:17.588323 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:52:17.588333 | orchestrator |
2026-04-09 02:52:17.588344 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-09 02:52:17.588355 | orchestrator | Thursday 09 April 2026 02:52:16 +0000 (0:00:00.155) 0:00:53.586 ********
2026-04-09 02:52:17.588366 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'vg_name': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.588378 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'vg_name': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.588389 | orchestrator |
2026-04-09 02:52:17.588400 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-09 02:52:17.588410 | orchestrator | Thursday 09 April 2026 02:52:17 +0000 (0:00:00.200) 0:00:53.786 ********
2026-04-09 02:52:17.588421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.588432 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:17.588443 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:17.588453 | orchestrator |
2026-04-09 02:52:17.588464 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-09 02:52:17.588475 | orchestrator | Thursday 09 April 2026 02:52:17 +0000 (0:00:00.192) 0:00:53.978 ********
2026-04-09 02:52:17.588486 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:17.588505 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:24.715411 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:24.715550 | orchestrator |
2026-04-09 02:52:24.715580 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-09 02:52:24.715601 | orchestrator | Thursday 09 April 2026 02:52:17 +0000 (0:00:00.193) 0:00:54.171 ********
2026-04-09 02:52:24.715621 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 02:52:24.715692 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 02:52:24.715714 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:52:24.715733 | orchestrator |
2026-04-09 02:52:24.715753 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-09 02:52:24.715771 | orchestrator | Thursday 09 April 2026 02:52:17 +0000 (0:00:00.181) 0:00:54.353 ********
2026-04-09 02:52:24.715790 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 02:52:24.715809 | orchestrator |     "lvm_report": {
2026-04-09 02:52:24.715828 | orchestrator |         "lv": [
2026-04-09 02:52:24.715845 | orchestrator |             {
2026-04-09 02:52:24.715865 | orchestrator |                 "lv_name": "osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f",
2026-04-09 02:52:24.715883 | orchestrator |                 "vg_name": "ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f"
2026-04-09 02:52:24.715901 | orchestrator |             },
2026-04-09 02:52:24.715921 | orchestrator |             {
2026-04-09 02:52:24.715940 | orchestrator |                 "lv_name": "osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6",
2026-04-09 02:52:24.715958 | orchestrator |                 "vg_name": "ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6"
2026-04-09 02:52:24.715976 | orchestrator |             }
2026-04-09 02:52:24.715996 | orchestrator |         ],
2026-04-09 02:52:24.716017 | orchestrator |         "pv": [
2026-04-09 02:52:24.716036 | orchestrator |             {
2026-04-09 02:52:24.716111 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-09 02:52:24.716132 | orchestrator |                 "vg_name": "ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f"
2026-04-09 02:52:24.716152 | orchestrator |             },
2026-04-09
02:52:24.716170 | orchestrator |  { 2026-04-09 02:52:24.716189 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-09 02:52:24.716207 | orchestrator |  "vg_name": "ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6" 2026-04-09 02:52:24.716225 | orchestrator |  } 2026-04-09 02:52:24.716242 | orchestrator |  ] 2026-04-09 02:52:24.716261 | orchestrator |  } 2026-04-09 02:52:24.716306 | orchestrator | } 2026-04-09 02:52:24.716326 | orchestrator | 2026-04-09 02:52:24.716345 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-09 02:52:24.716363 | orchestrator | 2026-04-09 02:52:24.716381 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 02:52:24.716393 | orchestrator | Thursday 09 April 2026 02:52:18 +0000 (0:00:00.325) 0:00:54.678 ******** 2026-04-09 02:52:24.716403 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-09 02:52:24.716414 | orchestrator | 2026-04-09 02:52:24.716425 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 02:52:24.716436 | orchestrator | Thursday 09 April 2026 02:52:18 +0000 (0:00:00.780) 0:00:55.459 ******** 2026-04-09 02:52:24.716447 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:52:24.716458 | orchestrator | 2026-04-09 02:52:24.716469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.716480 | orchestrator | Thursday 09 April 2026 02:52:19 +0000 (0:00:00.251) 0:00:55.710 ******** 2026-04-09 02:52:24.716498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-09 02:52:24.716517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-09 02:52:24.716534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-09 02:52:24.716554 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-09 02:52:24.716573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-09 02:52:24.716592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-09 02:52:24.716610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-09 02:52:24.716640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-09 02:52:24.716651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-09 02:52:24.716661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-09 02:52:24.716672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-09 02:52:24.716683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-09 02:52:24.716699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-09 02:52:24.716716 | orchestrator | 2026-04-09 02:52:24.716732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.716763 | orchestrator | Thursday 09 April 2026 02:52:19 +0000 (0:00:00.469) 0:00:56.180 ******** 2026-04-09 02:52:24.716781 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:24.716799 | orchestrator | 2026-04-09 02:52:24.716816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.716833 | orchestrator | Thursday 09 April 2026 02:52:19 +0000 (0:00:00.212) 0:00:56.393 ******** 2026-04-09 02:52:24.716850 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:24.716868 | orchestrator | 2026-04-09 
02:52:24.716885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.716927 | orchestrator | Thursday 09 April 2026 02:52:20 +0000 (0:00:00.238) 0:00:56.631 ******** 2026-04-09 02:52:24.716947 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:24.716973 | orchestrator | 2026-04-09 02:52:24.716996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717013 | orchestrator | Thursday 09 April 2026 02:52:20 +0000 (0:00:00.220) 0:00:56.852 ******** 2026-04-09 02:52:24.717031 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:24.717048 | orchestrator | 2026-04-09 02:52:24.717066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717085 | orchestrator | Thursday 09 April 2026 02:52:20 +0000 (0:00:00.226) 0:00:57.078 ******** 2026-04-09 02:52:24.717104 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:24.717122 | orchestrator | 2026-04-09 02:52:24.717140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717159 | orchestrator | Thursday 09 April 2026 02:52:20 +0000 (0:00:00.219) 0:00:57.298 ******** 2026-04-09 02:52:24.717177 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:24.717193 | orchestrator | 2026-04-09 02:52:24.717205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717215 | orchestrator | Thursday 09 April 2026 02:52:20 +0000 (0:00:00.205) 0:00:57.504 ******** 2026-04-09 02:52:24.717269 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:24.717373 | orchestrator | 2026-04-09 02:52:24.717391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717411 | orchestrator | Thursday 09 April 2026 02:52:21 +0000 (0:00:00.266) 
0:00:57.770 ******** 2026-04-09 02:52:24.717430 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:24.717448 | orchestrator | 2026-04-09 02:52:24.717464 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717474 | orchestrator | Thursday 09 April 2026 02:52:21 +0000 (0:00:00.725) 0:00:58.496 ******** 2026-04-09 02:52:24.717485 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965) 2026-04-09 02:52:24.717498 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965) 2026-04-09 02:52:24.717509 | orchestrator | 2026-04-09 02:52:24.717519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717530 | orchestrator | Thursday 09 April 2026 02:52:22 +0000 (0:00:00.498) 0:00:58.994 ******** 2026-04-09 02:52:24.717638 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e) 2026-04-09 02:52:24.717676 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e) 2026-04-09 02:52:24.717687 | orchestrator | 2026-04-09 02:52:24.717698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717709 | orchestrator | Thursday 09 April 2026 02:52:22 +0000 (0:00:00.516) 0:00:59.511 ******** 2026-04-09 02:52:24.717720 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4) 2026-04-09 02:52:24.717731 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4) 2026-04-09 02:52:24.717742 | orchestrator | 2026-04-09 02:52:24.717752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717763 | orchestrator | Thursday 09 
April 2026 02:52:23 +0000 (0:00:00.483) 0:00:59.995 ******** 2026-04-09 02:52:24.717774 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d) 2026-04-09 02:52:24.717785 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d) 2026-04-09 02:52:24.717796 | orchestrator | 2026-04-09 02:52:24.717807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 02:52:24.717818 | orchestrator | Thursday 09 April 2026 02:52:23 +0000 (0:00:00.467) 0:01:00.462 ******** 2026-04-09 02:52:24.717829 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 02:52:24.717839 | orchestrator | 2026-04-09 02:52:24.717850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:24.717861 | orchestrator | Thursday 09 April 2026 02:52:24 +0000 (0:00:00.370) 0:01:00.832 ******** 2026-04-09 02:52:24.717871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-09 02:52:24.717882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-09 02:52:24.717893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-09 02:52:24.717904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-09 02:52:24.717914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-09 02:52:24.717925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-09 02:52:24.717936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-09 02:52:24.717946 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-09 02:52:24.717957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-09 02:52:24.717968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-09 02:52:24.717978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-09 02:52:24.718003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-09 02:52:34.387782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-09 02:52:34.387904 | orchestrator | 2026-04-09 02:52:34.387915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.387923 | orchestrator | Thursday 09 April 2026 02:52:24 +0000 (0:00:00.458) 0:01:01.291 ******** 2026-04-09 02:52:34.387930 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.387938 | orchestrator | 2026-04-09 02:52:34.387944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.387968 | orchestrator | Thursday 09 April 2026 02:52:24 +0000 (0:00:00.207) 0:01:01.499 ******** 2026-04-09 02:52:34.387974 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388006 | orchestrator | 2026-04-09 02:52:34.388012 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388018 | orchestrator | Thursday 09 April 2026 02:52:25 +0000 (0:00:00.237) 0:01:01.736 ******** 2026-04-09 02:52:34.388025 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388031 | orchestrator | 2026-04-09 02:52:34.388037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388043 | 
orchestrator | Thursday 09 April 2026 02:52:25 +0000 (0:00:00.238) 0:01:01.974 ******** 2026-04-09 02:52:34.388049 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388055 | orchestrator | 2026-04-09 02:52:34.388061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388067 | orchestrator | Thursday 09 April 2026 02:52:25 +0000 (0:00:00.230) 0:01:02.205 ******** 2026-04-09 02:52:34.388073 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388079 | orchestrator | 2026-04-09 02:52:34.388085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388091 | orchestrator | Thursday 09 April 2026 02:52:26 +0000 (0:00:00.732) 0:01:02.937 ******** 2026-04-09 02:52:34.388098 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388104 | orchestrator | 2026-04-09 02:52:34.388110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388116 | orchestrator | Thursday 09 April 2026 02:52:26 +0000 (0:00:00.245) 0:01:03.183 ******** 2026-04-09 02:52:34.388122 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388128 | orchestrator | 2026-04-09 02:52:34.388134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388141 | orchestrator | Thursday 09 April 2026 02:52:26 +0000 (0:00:00.235) 0:01:03.418 ******** 2026-04-09 02:52:34.388147 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388153 | orchestrator | 2026-04-09 02:52:34.388159 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388165 | orchestrator | Thursday 09 April 2026 02:52:27 +0000 (0:00:00.217) 0:01:03.636 ******** 2026-04-09 02:52:34.388172 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-09 02:52:34.388179 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-04-09 02:52:34.388186 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-09 02:52:34.388192 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-09 02:52:34.388198 | orchestrator | 2026-04-09 02:52:34.388204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388211 | orchestrator | Thursday 09 April 2026 02:52:27 +0000 (0:00:00.738) 0:01:04.375 ******** 2026-04-09 02:52:34.388217 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388223 | orchestrator | 2026-04-09 02:52:34.388229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388235 | orchestrator | Thursday 09 April 2026 02:52:28 +0000 (0:00:00.225) 0:01:04.601 ******** 2026-04-09 02:52:34.388241 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388247 | orchestrator | 2026-04-09 02:52:34.388253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388259 | orchestrator | Thursday 09 April 2026 02:52:28 +0000 (0:00:00.241) 0:01:04.842 ******** 2026-04-09 02:52:34.388265 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388294 | orchestrator | 2026-04-09 02:52:34.388301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 02:52:34.388309 | orchestrator | Thursday 09 April 2026 02:52:28 +0000 (0:00:00.218) 0:01:05.060 ******** 2026-04-09 02:52:34.388316 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388324 | orchestrator | 2026-04-09 02:52:34.388331 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-09 02:52:34.388338 | orchestrator | Thursday 09 April 2026 02:52:28 +0000 (0:00:00.220) 0:01:05.280 ******** 2026-04-09 02:52:34.388345 | orchestrator | skipping: [testbed-node-5] 2026-04-09 
02:52:34.388352 | orchestrator | 2026-04-09 02:52:34.388365 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-09 02:52:34.388372 | orchestrator | Thursday 09 April 2026 02:52:28 +0000 (0:00:00.192) 0:01:05.473 ******** 2026-04-09 02:52:34.388380 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27c4b53f-c2bf-5253-84b2-9319684e0f9e'}}) 2026-04-09 02:52:34.388388 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}}) 2026-04-09 02:52:34.388395 | orchestrator | 2026-04-09 02:52:34.388403 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-09 02:52:34.388410 | orchestrator | Thursday 09 April 2026 02:52:29 +0000 (0:00:00.209) 0:01:05.683 ******** 2026-04-09 02:52:34.388419 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'}) 2026-04-09 02:52:34.388428 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}) 2026-04-09 02:52:34.388434 | orchestrator | 2026-04-09 02:52:34.388442 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-09 02:52:34.388465 | orchestrator | Thursday 09 April 2026 02:52:31 +0000 (0:00:01.915) 0:01:07.599 ******** 2026-04-09 02:52:34.388473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:34.388481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:34.388488 | orchestrator | skipping: 
[testbed-node-5] 2026-04-09 02:52:34.388495 | orchestrator | 2026-04-09 02:52:34.388507 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-09 02:52:34.388515 | orchestrator | Thursday 09 April 2026 02:52:31 +0000 (0:00:00.407) 0:01:08.006 ******** 2026-04-09 02:52:34.388522 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'}) 2026-04-09 02:52:34.388529 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}) 2026-04-09 02:52:34.388536 | orchestrator | 2026-04-09 02:52:34.388543 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-09 02:52:34.388551 | orchestrator | Thursday 09 April 2026 02:52:32 +0000 (0:00:01.366) 0:01:09.373 ******** 2026-04-09 02:52:34.388558 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:34.388569 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:34.388579 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388589 | orchestrator | 2026-04-09 02:52:34.388599 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-09 02:52:34.388609 | orchestrator | Thursday 09 April 2026 02:52:33 +0000 (0:00:00.221) 0:01:09.594 ******** 2026-04-09 02:52:34.388619 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388629 | orchestrator | 2026-04-09 02:52:34.388639 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-09 02:52:34.388650 | 
orchestrator | Thursday 09 April 2026 02:52:33 +0000 (0:00:00.172) 0:01:09.766 ******** 2026-04-09 02:52:34.388661 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:34.388671 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:34.388689 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388700 | orchestrator | 2026-04-09 02:52:34.388710 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-09 02:52:34.388721 | orchestrator | Thursday 09 April 2026 02:52:33 +0000 (0:00:00.177) 0:01:09.944 ******** 2026-04-09 02:52:34.388731 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388741 | orchestrator | 2026-04-09 02:52:34.388752 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-09 02:52:34.388762 | orchestrator | Thursday 09 April 2026 02:52:33 +0000 (0:00:00.153) 0:01:10.098 ******** 2026-04-09 02:52:34.388773 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:34.388779 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:34.388786 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388792 | orchestrator | 2026-04-09 02:52:34.388798 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-09 02:52:34.388804 | orchestrator | Thursday 09 April 2026 02:52:33 +0000 (0:00:00.166) 0:01:10.265 ******** 2026-04-09 02:52:34.388810 | orchestrator | 
skipping: [testbed-node-5] 2026-04-09 02:52:34.388816 | orchestrator | 2026-04-09 02:52:34.388822 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-09 02:52:34.388829 | orchestrator | Thursday 09 April 2026 02:52:33 +0000 (0:00:00.164) 0:01:10.429 ******** 2026-04-09 02:52:34.388835 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:34.388841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:34.388847 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:34.388853 | orchestrator | 2026-04-09 02:52:34.388860 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-09 02:52:34.388866 | orchestrator | Thursday 09 April 2026 02:52:34 +0000 (0:00:00.188) 0:01:10.617 ******** 2026-04-09 02:52:34.388872 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:52:34.388879 | orchestrator | 2026-04-09 02:52:34.388885 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-09 02:52:34.388891 | orchestrator | Thursday 09 April 2026 02:52:34 +0000 (0:00:00.199) 0:01:10.817 ******** 2026-04-09 02:52:34.388903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:41.366119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:41.366210 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366220 | orchestrator | 2026-04-09 02:52:41.366228 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-04-09 02:52:41.366235 | orchestrator | Thursday 09 April 2026 02:52:34 +0000 (0:00:00.156) 0:01:10.974 ******** 2026-04-09 02:52:41.366253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:41.366260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:41.366266 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366325 | orchestrator | 2026-04-09 02:52:41.366336 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-09 02:52:41.366345 | orchestrator | Thursday 09 April 2026 02:52:34 +0000 (0:00:00.157) 0:01:11.131 ******** 2026-04-09 02:52:41.366374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:41.366381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:41.366387 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366392 | orchestrator | 2026-04-09 02:52:41.366398 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-09 02:52:41.366404 | orchestrator | Thursday 09 April 2026 02:52:34 +0000 (0:00:00.397) 0:01:11.528 ******** 2026-04-09 02:52:41.366410 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366416 | orchestrator | 2026-04-09 02:52:41.366421 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-09 02:52:41.366427 | orchestrator | Thursday 09 April 2026 02:52:35 +0000 
(0:00:00.156) 0:01:11.685 ******** 2026-04-09 02:52:41.366433 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366439 | orchestrator | 2026-04-09 02:52:41.366445 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-09 02:52:41.366451 | orchestrator | Thursday 09 April 2026 02:52:35 +0000 (0:00:00.176) 0:01:11.862 ******** 2026-04-09 02:52:41.366456 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366462 | orchestrator | 2026-04-09 02:52:41.366468 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-09 02:52:41.366473 | orchestrator | Thursday 09 April 2026 02:52:35 +0000 (0:00:00.158) 0:01:12.020 ******** 2026-04-09 02:52:41.366479 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 02:52:41.366486 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-09 02:52:41.366492 | orchestrator | } 2026-04-09 02:52:41.366499 | orchestrator | 2026-04-09 02:52:41.366504 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-09 02:52:41.366510 | orchestrator | Thursday 09 April 2026 02:52:35 +0000 (0:00:00.156) 0:01:12.177 ******** 2026-04-09 02:52:41.366516 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 02:52:41.366522 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-09 02:52:41.366527 | orchestrator | } 2026-04-09 02:52:41.366533 | orchestrator | 2026-04-09 02:52:41.366539 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-09 02:52:41.366545 | orchestrator | Thursday 09 April 2026 02:52:35 +0000 (0:00:00.156) 0:01:12.334 ******** 2026-04-09 02:52:41.366550 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 02:52:41.366556 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-09 02:52:41.366562 | orchestrator | } 2026-04-09 02:52:41.366568 | orchestrator | 2026-04-09 02:52:41.366573 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-09 02:52:41.366579 | orchestrator | Thursday 09 April 2026 02:52:35 +0000 (0:00:00.154) 0:01:12.488 ******** 2026-04-09 02:52:41.366585 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:52:41.366591 | orchestrator | 2026-04-09 02:52:41.366597 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-09 02:52:41.366603 | orchestrator | Thursday 09 April 2026 02:52:36 +0000 (0:00:00.564) 0:01:13.052 ******** 2026-04-09 02:52:41.366608 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:52:41.366614 | orchestrator | 2026-04-09 02:52:41.366620 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-09 02:52:41.366626 | orchestrator | Thursday 09 April 2026 02:52:37 +0000 (0:00:00.589) 0:01:13.642 ******** 2026-04-09 02:52:41.366631 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:52:41.366638 | orchestrator | 2026-04-09 02:52:41.366645 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-09 02:52:41.366652 | orchestrator | Thursday 09 April 2026 02:52:37 +0000 (0:00:00.553) 0:01:14.196 ******** 2026-04-09 02:52:41.366658 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:52:41.366665 | orchestrator | 2026-04-09 02:52:41.366672 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-09 02:52:41.366683 | orchestrator | Thursday 09 April 2026 02:52:37 +0000 (0:00:00.170) 0:01:14.366 ******** 2026-04-09 02:52:41.366690 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366698 | orchestrator | 2026-04-09 02:52:41.366704 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-09 02:52:41.366711 | orchestrator | Thursday 09 April 2026 02:52:37 +0000 (0:00:00.118) 0:01:14.485 ******** 2026-04-09 02:52:41.366717 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366724 | orchestrator | 2026-04-09 02:52:41.366731 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-09 02:52:41.366737 | orchestrator | Thursday 09 April 2026 02:52:38 +0000 (0:00:00.372) 0:01:14.857 ******** 2026-04-09 02:52:41.366745 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 02:52:41.366752 | orchestrator |  "vgs_report": { 2026-04-09 02:52:41.366759 | orchestrator |  "vg": [] 2026-04-09 02:52:41.366778 | orchestrator |  } 2026-04-09 02:52:41.366785 | orchestrator | } 2026-04-09 02:52:41.366793 | orchestrator | 2026-04-09 02:52:41.366799 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-09 02:52:41.366806 | orchestrator | Thursday 09 April 2026 02:52:38 +0000 (0:00:00.154) 0:01:15.012 ******** 2026-04-09 02:52:41.366812 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366818 | orchestrator | 2026-04-09 02:52:41.366823 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-09 02:52:41.366834 | orchestrator | Thursday 09 April 2026 02:52:38 +0000 (0:00:00.149) 0:01:15.162 ******** 2026-04-09 02:52:41.366840 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366845 | orchestrator | 2026-04-09 02:52:41.366851 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-09 02:52:41.366857 | orchestrator | Thursday 09 April 2026 02:52:38 +0000 (0:00:00.167) 0:01:15.330 ******** 2026-04-09 02:52:41.366863 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366868 | orchestrator | 2026-04-09 02:52:41.366874 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-09 02:52:41.366880 | orchestrator | Thursday 09 April 2026 02:52:38 +0000 (0:00:00.154) 0:01:15.484 ******** 2026-04-09 02:52:41.366885 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366891 | orchestrator | 2026-04-09 02:52:41.366897 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-09 02:52:41.366903 | orchestrator | Thursday 09 April 2026 02:52:39 +0000 (0:00:00.179) 0:01:15.664 ******** 2026-04-09 02:52:41.366908 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366914 | orchestrator | 2026-04-09 02:52:41.366920 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-09 02:52:41.366925 | orchestrator | Thursday 09 April 2026 02:52:39 +0000 (0:00:00.157) 0:01:15.821 ******** 2026-04-09 02:52:41.366931 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366937 | orchestrator | 2026-04-09 02:52:41.366943 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-09 02:52:41.366948 | orchestrator | Thursday 09 April 2026 02:52:39 +0000 (0:00:00.142) 0:01:15.964 ******** 2026-04-09 02:52:41.366954 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366960 | orchestrator | 2026-04-09 02:52:41.366965 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-09 02:52:41.366971 | orchestrator | Thursday 09 April 2026 02:52:39 +0000 (0:00:00.142) 0:01:16.106 ******** 2026-04-09 02:52:41.366977 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.366983 | orchestrator | 2026-04-09 02:52:41.366989 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-09 02:52:41.366994 | orchestrator | Thursday 09 April 2026 02:52:39 +0000 (0:00:00.139) 0:01:16.245 ******** 2026-04-09 02:52:41.367000 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.367006 | orchestrator | 2026-04-09 02:52:41.367012 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-04-09 02:52:41.367017 | orchestrator | Thursday 09 April 2026 02:52:39 +0000 (0:00:00.156) 0:01:16.402 ******** 2026-04-09 02:52:41.367027 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.367033 | orchestrator | 2026-04-09 02:52:41.367039 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-09 02:52:41.367045 | orchestrator | Thursday 09 April 2026 02:52:39 +0000 (0:00:00.160) 0:01:16.563 ******** 2026-04-09 02:52:41.367050 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.367056 | orchestrator | 2026-04-09 02:52:41.367062 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-09 02:52:41.367067 | orchestrator | Thursday 09 April 2026 02:52:40 +0000 (0:00:00.412) 0:01:16.976 ******** 2026-04-09 02:52:41.367073 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.367079 | orchestrator | 2026-04-09 02:52:41.367085 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-09 02:52:41.367090 | orchestrator | Thursday 09 April 2026 02:52:40 +0000 (0:00:00.140) 0:01:17.116 ******** 2026-04-09 02:52:41.367096 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.367102 | orchestrator | 2026-04-09 02:52:41.367107 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-09 02:52:41.367113 | orchestrator | Thursday 09 April 2026 02:52:40 +0000 (0:00:00.154) 0:01:17.271 ******** 2026-04-09 02:52:41.367119 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.367124 | orchestrator | 2026-04-09 02:52:41.367130 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-09 02:52:41.367136 | orchestrator | Thursday 09 April 2026 02:52:40 +0000 (0:00:00.176) 0:01:17.447 ******** 2026-04-09 02:52:41.367142 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:41.367147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:41.367153 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.367159 | orchestrator | 2026-04-09 02:52:41.367164 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-09 02:52:41.367170 | orchestrator | Thursday 09 April 2026 02:52:41 +0000 (0:00:00.164) 0:01:17.612 ******** 2026-04-09 02:52:41.367176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:41.367182 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:41.367187 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:41.367193 | orchestrator | 2026-04-09 02:52:41.367199 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-09 02:52:41.367205 | orchestrator | Thursday 09 April 2026 02:52:41 +0000 (0:00:00.169) 0:01:17.781 ******** 2026-04-09 02:52:41.367215 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:44.727060 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:44.727150 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:44.727162 | orchestrator | 2026-04-09 02:52:44.727186 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-04-09 02:52:44.727194 | orchestrator | Thursday 09 April 2026 02:52:41 +0000 (0:00:00.170) 0:01:17.952 ******** 2026-04-09 02:52:44.727202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:44.727209 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:44.727235 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:44.727243 | orchestrator | 2026-04-09 02:52:44.727250 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-09 02:52:44.727257 | orchestrator | Thursday 09 April 2026 02:52:41 +0000 (0:00:00.178) 0:01:18.131 ******** 2026-04-09 02:52:44.727263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:44.727344 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:44.727352 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:44.727359 | orchestrator | 2026-04-09 02:52:44.727366 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-09 02:52:44.727373 | orchestrator | Thursday 09 April 2026 02:52:41 +0000 (0:00:00.173) 0:01:18.304 ******** 2026-04-09 02:52:44.727379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:44.727386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:44.727393 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:44.727400 | orchestrator | 2026-04-09 02:52:44.727407 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-09 02:52:44.727414 | orchestrator | Thursday 09 April 2026 02:52:41 +0000 (0:00:00.175) 0:01:18.480 ******** 2026-04-09 02:52:44.727421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:44.727427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:44.727434 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:44.727441 | orchestrator | 2026-04-09 02:52:44.727448 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-09 02:52:44.727454 | orchestrator | Thursday 09 April 2026 02:52:42 +0000 (0:00:00.181) 0:01:18.661 ******** 2026-04-09 02:52:44.727461 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:44.727468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:44.727474 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:44.727481 | orchestrator | 2026-04-09 02:52:44.727488 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-09 02:52:44.727495 | orchestrator | Thursday 09 April 2026 02:52:42 +0000 (0:00:00.177) 0:01:18.838 ******** 2026-04-09 02:52:44.727502 | 
orchestrator | ok: [testbed-node-5] 2026-04-09 02:52:44.727509 | orchestrator | 2026-04-09 02:52:44.727515 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-09 02:52:44.727522 | orchestrator | Thursday 09 April 2026 02:52:43 +0000 (0:00:00.800) 0:01:19.639 ******** 2026-04-09 02:52:44.727529 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:52:44.727535 | orchestrator | 2026-04-09 02:52:44.727542 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-09 02:52:44.727550 | orchestrator | Thursday 09 April 2026 02:52:43 +0000 (0:00:00.559) 0:01:20.199 ******** 2026-04-09 02:52:44.727556 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:52:44.727563 | orchestrator | 2026-04-09 02:52:44.727570 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-09 02:52:44.727576 | orchestrator | Thursday 09 April 2026 02:52:43 +0000 (0:00:00.181) 0:01:20.380 ******** 2026-04-09 02:52:44.727590 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'vg_name': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}) 2026-04-09 02:52:44.727598 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'vg_name': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'}) 2026-04-09 02:52:44.727604 | orchestrator | 2026-04-09 02:52:44.727611 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-09 02:52:44.727618 | orchestrator | Thursday 09 April 2026 02:52:43 +0000 (0:00:00.190) 0:01:20.570 ******** 2026-04-09 02:52:44.727641 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:44.727654 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:44.727662 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:44.727670 | orchestrator | 2026-04-09 02:52:44.727677 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-09 02:52:44.727684 | orchestrator | Thursday 09 April 2026 02:52:44 +0000 (0:00:00.188) 0:01:20.759 ******** 2026-04-09 02:52:44.727691 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:44.727699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:44.727707 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:44.727714 | orchestrator | 2026-04-09 02:52:44.727721 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-09 02:52:44.727729 | orchestrator | Thursday 09 April 2026 02:52:44 +0000 (0:00:00.156) 0:01:20.915 ******** 2026-04-09 02:52:44.727736 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 02:52:44.727742 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 02:52:44.727750 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:52:44.727757 | orchestrator | 2026-04-09 02:52:44.727765 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-09 02:52:44.727772 | orchestrator | Thursday 09 April 2026 02:52:44 +0000 (0:00:00.155) 0:01:21.071 ******** 2026-04-09 02:52:44.727778 | 
orchestrator | ok: [testbed-node-5] => { 2026-04-09 02:52:44.727786 | orchestrator |  "lvm_report": { 2026-04-09 02:52:44.727794 | orchestrator |  "lv": [ 2026-04-09 02:52:44.727801 | orchestrator |  { 2026-04-09 02:52:44.727808 | orchestrator |  "lv_name": "osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6", 2026-04-09 02:52:44.727816 | orchestrator |  "vg_name": "ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6" 2026-04-09 02:52:44.727823 | orchestrator |  }, 2026-04-09 02:52:44.727829 | orchestrator |  { 2026-04-09 02:52:44.727836 | orchestrator |  "lv_name": "osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e", 2026-04-09 02:52:44.727843 | orchestrator |  "vg_name": "ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e" 2026-04-09 02:52:44.727849 | orchestrator |  } 2026-04-09 02:52:44.727856 | orchestrator |  ], 2026-04-09 02:52:44.727862 | orchestrator |  "pv": [ 2026-04-09 02:52:44.727870 | orchestrator |  { 2026-04-09 02:52:44.727876 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-09 02:52:44.727883 | orchestrator |  "vg_name": "ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e" 2026-04-09 02:52:44.727890 | orchestrator |  }, 2026-04-09 02:52:44.727897 | orchestrator |  { 2026-04-09 02:52:44.727903 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-09 02:52:44.727918 | orchestrator |  "vg_name": "ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6" 2026-04-09 02:52:44.727925 | orchestrator |  } 2026-04-09 02:52:44.727931 | orchestrator |  ] 2026-04-09 02:52:44.727938 | orchestrator |  } 2026-04-09 02:52:44.727945 | orchestrator | } 2026-04-09 02:52:44.727952 | orchestrator | 2026-04-09 02:52:44.727959 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:52:44.727966 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-09 02:52:44.727973 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-09 02:52:44.727980 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-09 02:52:44.727986 | orchestrator | 2026-04-09 02:52:44.727993 | orchestrator | 2026-04-09 02:52:44.727999 | orchestrator | 2026-04-09 02:52:44.728006 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:52:44.728013 | orchestrator | Thursday 09 April 2026 02:52:44 +0000 (0:00:00.219) 0:01:21.291 ******** 2026-04-09 02:52:44.728020 | orchestrator | =============================================================================== 2026-04-09 02:52:44.728026 | orchestrator | Create block VGs -------------------------------------------------------- 5.86s 2026-04-09 02:52:44.728033 | orchestrator | Create block LVs -------------------------------------------------------- 4.33s 2026-04-09 02:52:44.728040 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.92s 2026-04-09 02:52:44.728046 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.85s 2026-04-09 02:52:44.728053 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.69s 2026-04-09 02:52:44.728060 | orchestrator | Add known links to the list of available block devices ------------------ 1.65s 2026-04-09 02:52:44.728066 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.63s 2026-04-09 02:52:44.728073 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.62s 2026-04-09 02:52:44.728084 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s 2026-04-09 02:52:45.197603 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.39s 2026-04-09 02:52:45.197691 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s 2026-04-09 02:52:45.197700 | 
orchestrator | Add known links to the list of available block devices ------------------ 1.05s 2026-04-09 02:52:45.197724 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.92s 2026-04-09 02:52:45.197731 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.86s 2026-04-09 02:52:45.197737 | orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2026-04-09 02:52:45.197743 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-04-09 02:52:45.197749 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2026-04-09 02:52:45.197756 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.81s 2026-04-09 02:52:45.197763 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.78s 2026-04-09 02:52:45.197770 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-04-09 02:52:57.857786 | orchestrator | 2026-04-09 02:52:57 | INFO  | Task 8b441457-ad80-4563-bbb8-d88f2da89be2 (facts) was prepared for execution. 2026-04-09 02:52:57.857879 | orchestrator | 2026-04-09 02:52:57 | INFO  | It takes a moment until task 8b441457-ad80-4563-bbb8-d88f2da89be2 (facts) has been started and output is visible here. 
2026-04-09 02:53:12.427106 | orchestrator | 2026-04-09 02:53:12.427218 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-09 02:53:12.427261 | orchestrator | 2026-04-09 02:53:12.427409 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-09 02:53:12.427435 | orchestrator | Thursday 09 April 2026 02:53:02 +0000 (0:00:00.334) 0:00:00.334 ******** 2026-04-09 02:53:12.427447 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:53:12.427459 | orchestrator | ok: [testbed-manager] 2026-04-09 02:53:12.427470 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:53:12.427481 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:53:12.427492 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:53:12.427503 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:53:12.427513 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:53:12.427524 | orchestrator | 2026-04-09 02:53:12.427547 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-09 02:53:12.427558 | orchestrator | Thursday 09 April 2026 02:53:04 +0000 (0:00:01.384) 0:00:01.718 ******** 2026-04-09 02:53:12.427569 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:53:12.427581 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:53:12.427592 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:53:12.427603 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:53:12.427614 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:53:12.427625 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:53:12.427636 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:53:12.427648 | orchestrator | 2026-04-09 02:53:12.427668 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 02:53:12.427689 | orchestrator | 2026-04-09 02:53:12.427709 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-09 02:53:12.427730 | orchestrator | Thursday 09 April 2026 02:53:05 +0000 (0:00:01.458) 0:00:03.177 ******** 2026-04-09 02:53:12.427752 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:53:12.427774 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:53:12.427796 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:53:12.427818 | orchestrator | ok: [testbed-manager] 2026-04-09 02:53:12.427835 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:53:12.427878 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:53:12.427892 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:53:12.427904 | orchestrator | 2026-04-09 02:53:12.427917 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 02:53:12.427930 | orchestrator | 2026-04-09 02:53:12.427942 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 02:53:12.427955 | orchestrator | Thursday 09 April 2026 02:53:11 +0000 (0:00:05.749) 0:00:08.926 ******** 2026-04-09 02:53:12.427968 | orchestrator | skipping: [testbed-manager] 2026-04-09 02:53:12.427981 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:53:12.427993 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:53:12.428006 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:53:12.428018 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:53:12.428031 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:53:12.428044 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:53:12.428056 | orchestrator | 2026-04-09 02:53:12.428069 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 02:53:12.428083 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:53:12.428097 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-09 02:53:12.428110 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:53:12.428124 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:53:12.428137 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:53:12.428162 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:53:12.428175 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 02:53:12.428188 | orchestrator | 2026-04-09 02:53:12.428201 | orchestrator | 2026-04-09 02:53:12.428212 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 02:53:12.428242 | orchestrator | Thursday 09 April 2026 02:53:11 +0000 (0:00:00.697) 0:00:09.624 ******** 2026-04-09 02:53:12.428255 | orchestrator | =============================================================================== 2026-04-09 02:53:12.428295 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.75s 2026-04-09 02:53:12.428310 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s 2026-04-09 02:53:12.428322 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.38s 2026-04-09 02:53:12.428335 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.70s 2026-04-09 02:53:15.094779 | orchestrator | 2026-04-09 02:53:15 | INFO  | Task 2e565d6d-9656-4c94-9434-468e29f4d6ec (ceph) was prepared for execution. 2026-04-09 02:53:15.095825 | orchestrator | 2026-04-09 02:53:15 | INFO  | It takes a moment until task 2e565d6d-9656-4c94-9434-468e29f4d6ec (ceph) has been started and output is visible here. 
2026-04-09 02:53:34.654331 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-09 02:53:34.654426 | orchestrator | 2.16.14 2026-04-09 02:53:34.654438 | orchestrator | 2026-04-09 02:53:34.654447 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-09 02:53:34.654455 | orchestrator | 2026-04-09 02:53:34.654463 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 02:53:34.654471 | orchestrator | Thursday 09 April 2026 02:53:20 +0000 (0:00:00.883) 0:00:00.883 ******** 2026-04-09 02:53:34.654479 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:53:34.654487 | orchestrator | 2026-04-09 02:53:34.654495 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-09 02:53:34.654502 | orchestrator | Thursday 09 April 2026 02:53:21 +0000 (0:00:01.264) 0:00:02.148 ******** 2026-04-09 02:53:34.654510 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:53:34.654518 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:53:34.654525 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:53:34.654532 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:53:34.654539 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:53:34.654547 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:53:34.654554 | orchestrator | 2026-04-09 02:53:34.654562 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-09 02:53:34.654569 | orchestrator | Thursday 09 April 2026 02:53:23 +0000 (0:00:01.319) 0:00:03.467 ******** 2026-04-09 02:53:34.654577 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:53:34.654584 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:53:34.654592 | orchestrator | ok: [testbed-node-5] 2026-04-09 
02:53:34.654599 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:53:34.654606 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:53:34.654613 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:53:34.654621 | orchestrator | 2026-04-09 02:53:34.654628 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 02:53:34.654635 | orchestrator | Thursday 09 April 2026 02:53:24 +0000 (0:00:00.867) 0:00:04.334 ******** 2026-04-09 02:53:34.654643 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:53:34.654650 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:53:34.654667 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:53:34.654674 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:53:34.654698 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:53:34.654706 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:53:34.654713 | orchestrator | 2026-04-09 02:53:34.654720 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-09 02:53:34.654727 | orchestrator | Thursday 09 April 2026 02:53:25 +0000 (0:00:00.986) 0:00:05.321 ******** 2026-04-09 02:53:34.654735 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:53:34.654742 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:53:34.654749 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:53:34.654756 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:53:34.654763 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:53:34.654770 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:53:34.654777 | orchestrator | 2026-04-09 02:53:34.654785 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-09 02:53:34.654792 | orchestrator | Thursday 09 April 2026 02:53:25 +0000 (0:00:00.866) 0:00:06.188 ******** 2026-04-09 02:53:34.654799 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:53:34.654806 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:53:34.654814 | orchestrator | ok: 
[testbed-node-5] 2026-04-09 02:53:34.654821 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:53:34.654828 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:53:34.654835 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:53:34.654843 | orchestrator | 2026-04-09 02:53:34.654855 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-09 02:53:34.654867 | orchestrator | Thursday 09 April 2026 02:53:26 +0000 (0:00:00.695) 0:00:06.883 ******** 2026-04-09 02:53:34.654879 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:53:34.654891 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:53:34.654904 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:53:34.654917 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:53:34.654931 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:53:34.654944 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:53:34.654957 | orchestrator | 2026-04-09 02:53:34.654968 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-09 02:53:34.654977 | orchestrator | Thursday 09 April 2026 02:53:27 +0000 (0:00:00.922) 0:00:07.805 ******** 2026-04-09 02:53:34.654985 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:53:34.654993 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:53:34.655000 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:53:34.655007 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:53:34.655015 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:53:34.655022 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:53:34.655029 | orchestrator | 2026-04-09 02:53:34.655036 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-09 02:53:34.655044 | orchestrator | Thursday 09 April 2026 02:53:28 +0000 (0:00:00.675) 0:00:08.481 ******** 2026-04-09 02:53:34.655051 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:53:34.655058 | orchestrator | 
ok: [testbed-node-4]
2026-04-09 02:53:34.655065 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:53:34.655072 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:53:34.655091 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:53:34.655099 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:53:34.655106 | orchestrator |
2026-04-09 02:53:34.655113 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 02:53:34.655120 | orchestrator | Thursday 09 April 2026 02:53:29 +0000 (0:00:00.867) 0:00:09.349 ********
2026-04-09 02:53:34.655128 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 02:53:34.655135 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 02:53:34.655142 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 02:53:34.655149 | orchestrator |
2026-04-09 02:53:34.655156 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 02:53:34.655163 | orchestrator | Thursday 09 April 2026 02:53:29 +0000 (0:00:00.709) 0:00:10.059 ********
2026-04-09 02:53:34.655177 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:53:34.655185 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:53:34.655192 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:53:34.655212 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:53:34.655219 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:53:34.655226 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:53:34.655233 | orchestrator |
2026-04-09 02:53:34.655241 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 02:53:34.655248 | orchestrator | Thursday 09 April 2026 02:53:30 +0000 (0:00:00.787) 0:00:10.847 ********
2026-04-09 02:53:34.655255 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 02:53:34.655282 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 02:53:34.655291 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 02:53:34.655299 | orchestrator |
2026-04-09 02:53:34.655306 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 02:53:34.655313 | orchestrator | Thursday 09 April 2026 02:53:33 +0000 (0:00:02.486) 0:00:13.333 ********
2026-04-09 02:53:34.655320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 02:53:34.655328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 02:53:34.655335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 02:53:34.655342 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:34.655349 | orchestrator |
2026-04-09 02:53:34.655356 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 02:53:34.655364 | orchestrator | Thursday 09 April 2026 02:53:33 +0000 (0:00:00.453) 0:00:13.787 ********
2026-04-09 02:53:34.655373 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 02:53:34.655383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 02:53:34.655390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 02:53:34.655397 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:34.655404 | orchestrator |
2026-04-09 02:53:34.655412 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 02:53:34.655419 | orchestrator | Thursday 09 April 2026 02:53:34 +0000 (0:00:00.636) 0:00:14.424 ********
2026-04-09 02:53:34.655427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:34.655437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:34.655445 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:34.655458 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:34.655465 | orchestrator |
2026-04-09 02:53:34.655477 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 02:53:34.655484 | orchestrator | Thursday 09 April 2026 02:53:34 +0000 (0:00:00.200) 0:00:14.624 ********
2026-04-09 02:53:34.655499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 02:53:31.619178', 'end': '2026-04-09 02:53:31.663385', 'delta': '0:00:00.044207', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 02:53:46.516505 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 02:53:32.185392', 'end': '2026-04-09 02:53:32.222369', 'delta': '0:00:00.036977', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 02:53:46.516597 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 02:53:32.705464', 'end': '2026-04-09 02:53:32.751649', 'delta': '0:00:00.046185', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 02:53:46.516609 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.516620 | orchestrator |
2026-04-09 02:53:46.516629 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-09 02:53:46.516639 | orchestrator | Thursday 09 April 2026 02:53:34 +0000 (0:00:00.218) 0:00:14.843 ********
2026-04-09 02:53:46.516647 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:53:46.516656 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:53:46.516664 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:53:46.516672 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:53:46.516680 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:53:46.516688 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:53:46.516696 | orchestrator |
2026-04-09 02:53:46.516704 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 02:53:46.516712 | orchestrator | Thursday 09 April 2026 02:53:35 +0000 (0:00:00.986) 0:00:15.830 ********
2026-04-09 02:53:46.516720 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 02:53:46.516729 | orchestrator |
2026-04-09 02:53:46.516737 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 02:53:46.516745 | orchestrator | Thursday 09 April 2026 02:53:37 +0000 (0:00:01.881) 0:00:17.711 ********
2026-04-09 02:53:46.516778 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.516786 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.516794 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.516802 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.516811 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.516825 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.516839 | orchestrator |
2026-04-09 02:53:46.516852 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 02:53:46.516864 | orchestrator | Thursday 09 April 2026 02:53:38 +0000 (0:00:00.883) 0:00:18.594 ********
2026-04-09 02:53:46.516877 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.516889 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.516901 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.516914 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.516927 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.516939 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.516950 | orchestrator |
2026-04-09 02:53:46.516965 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 02:53:46.516977 | orchestrator | Thursday 09 April 2026 02:53:39 +0000 (0:00:01.276) 0:00:19.871 ********
2026-04-09 02:53:46.516990 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517002 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.517014 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.517026 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.517040 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.517070 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.517086 | orchestrator |
2026-04-09 02:53:46.517101 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-09 02:53:46.517116 | orchestrator | Thursday 09 April 2026 02:53:40 +0000 (0:00:00.666) 0:00:20.538 ********
2026-04-09 02:53:46.517129 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517142 | orchestrator |
2026-04-09 02:53:46.517156 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-09 02:53:46.517172 | orchestrator | Thursday 09 April 2026 02:53:40 +0000 (0:00:00.137) 0:00:20.675 ********
2026-04-09 02:53:46.517187 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517204 | orchestrator |
2026-04-09 02:53:46.517219 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 02:53:46.517233 | orchestrator | Thursday 09 April 2026 02:53:40 +0000 (0:00:00.235) 0:00:20.910 ********
2026-04-09 02:53:46.517247 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517285 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.517302 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.517312 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.517321 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.517330 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.517339 | orchestrator |
2026-04-09 02:53:46.517366 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-09 02:53:46.517376 | orchestrator | Thursday 09 April 2026 02:53:41 +0000 (0:00:00.903) 0:00:21.814 ********
2026-04-09 02:53:46.517385 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517394 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.517402 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.517410 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.517417 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.517425 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.517433 | orchestrator |
2026-04-09 02:53:46.517444 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-09 02:53:46.517456 | orchestrator | Thursday 09 April 2026 02:53:42 +0000 (0:00:00.680) 0:00:22.495 ********
2026-04-09 02:53:46.517469 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517482 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.517496 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.517521 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.517530 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.517538 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.517546 | orchestrator |
2026-04-09 02:53:46.517554 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-09 02:53:46.517562 | orchestrator | Thursday 09 April 2026 02:53:43 +0000 (0:00:00.932) 0:00:23.427 ********
2026-04-09 02:53:46.517570 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517577 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.517585 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.517593 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.517601 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.517608 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.517616 | orchestrator |
2026-04-09 02:53:46.517624 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-09 02:53:46.517632 | orchestrator | Thursday 09 April 2026 02:53:43 +0000 (0:00:00.663) 0:00:24.091 ********
2026-04-09 02:53:46.517640 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517647 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.517655 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.517663 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.517671 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.517679 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.517686 | orchestrator |
2026-04-09 02:53:46.517694 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-09 02:53:46.517702 | orchestrator | Thursday 09 April 2026 02:53:44 +0000 (0:00:00.885) 0:00:24.977 ********
2026-04-09 02:53:46.517710 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517718 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.517726 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.517734 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.517742 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.517749 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.517757 | orchestrator |
2026-04-09 02:53:46.517765 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-09 02:53:46.517774 | orchestrator | Thursday 09 April 2026 02:53:45 +0000 (0:00:00.686) 0:00:25.663 ********
2026-04-09 02:53:46.517781 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.517789 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:46.517797 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:46.517805 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:46.517813 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:46.517821 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:46.517829 | orchestrator |
2026-04-09 02:53:46.517837 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-09 02:53:46.517845 | orchestrator | Thursday 09 April 2026 02:53:46 +0000 (0:00:00.890) 0:00:26.554 ********
2026-04-09 02:53:46.517855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.517873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.517894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.631178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.631375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.631392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.631402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.631411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.631419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.631427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.631474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.631528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.631539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.631548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.631568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.631585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 02:53:46.839929 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:46.839970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.840018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.840045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.840066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.944907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:46.945037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:46.945374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 02:53:46.945411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 02:53:46.945444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 02:53:47.246575 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:53:47.246665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 02:53:47.246675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 02:53:47.246700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.246707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-09 02:53:47.246722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.246727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.246733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.246739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.246760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.246767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.246780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 02:53:47.246794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 02:53:47.246801 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:53:47.246807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.246814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.246825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389838 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:53:47.389858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 02:53:47.389869 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 02:53:47.389875 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:53:47.389879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-09 02:53:47.389895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.389916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 02:53:47.722973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:47.723105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 02:53:47.723135 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:47.723161 | orchestrator |
2026-04-09 02:53:47.723182 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-09 02:53:47.723204 | orchestrator | Thursday 09 April 2026 02:53:47 +0000 (0:00:01.137) 0:00:27.691 ********
2026-04-09 02:53:47.723218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.723363 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.723381 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.723395 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.723416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.723428 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.723448 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.723502 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.767182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.767307 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.767354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.767370 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.767379 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.767403 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.767437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 02:53:47.767447 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.767455 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:47.767473 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117555 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117664 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117678 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117687 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117712 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117719 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117744 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117766 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.117793 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.373130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.374142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.374213 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:53:48.374230 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.374243 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.374254 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 02:53:48.374306 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.374343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.374365 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 02:53:48.374377 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.374397 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.374408 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.374420 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.374431 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.374457 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.411653 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.411779 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.411796 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.411808 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.411819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.411830 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.411861 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.411878 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.411964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650339 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650429 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650444 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650469 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650499 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650528 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:48.650539 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650549 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650557 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650565 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650573 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650586 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.650614 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.885888 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.885975 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.886089 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.886105 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:48.886119 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:48.886148 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.886161 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.886171 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.886181 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.886191 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.886218 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.886232 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:48.886253 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:56.815134 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:56.815416 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 02:53:56.815460 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:56.815484 | orchestrator |
2026-04-09 02:53:56.815506 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-09 02:53:56.815527 | orchestrator | Thursday 09 April 2026 02:53:48 +0000 (0:00:01.385) 0:00:29.077 ********
2026-04-09 02:53:56.815547 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:53:56.815562 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:53:56.815572 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:53:56.815583 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:53:56.815593 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:53:56.815604 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:53:56.815614 | orchestrator |
2026-04-09 02:53:56.815628 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 02:53:56.815641 | orchestrator | Thursday 09 April 2026 02:53:49 +0000 (0:00:00.980) 0:00:30.058 ********
2026-04-09 02:53:56.815654 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:53:56.815666 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:53:56.815678 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:53:56.815691 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:53:56.815703 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:53:56.815714 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:53:56.815726 | orchestrator |
2026-04-09 02:53:56.815739 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 02:53:56.815752 | orchestrator | Thursday 09 April 2026 02:53:50 +0000 (0:00:00.933) 0:00:30.991 ********
2026-04-09 02:53:56.815765 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:56.815777 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:56.815789 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:56.815823 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:56.815836 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:56.815848 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:56.815860 | orchestrator |
2026-04-09 02:53:56.815874 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 02:53:56.815887 | orchestrator | Thursday 09 April 2026 02:53:51 +0000 (0:00:00.679) 0:00:31.670 ********
2026-04-09 02:53:56.815916 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:56.815937 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:56.815948 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:56.815959 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:56.815970 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:56.815980 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:56.815991 | orchestrator |
2026-04-09 02:53:56.816002 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 02:53:56.816013 | orchestrator | Thursday 09 April 2026 02:53:52 +0000 (0:00:00.916) 0:00:32.587 ********
2026-04-09 02:53:56.816023 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:56.816034 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:56.816045 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:56.816068 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:56.816079 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:56.816089 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:56.816100 | orchestrator |
2026-04-09 02:53:56.816111 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 02:53:56.816121 | orchestrator | Thursday 09 April 2026 02:53:53 +0000 (0:00:00.662) 0:00:33.249 ********
2026-04-09 02:53:56.816132 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:53:56.816143 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:53:56.816153 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:53:56.816164 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:53:56.816175 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:53:56.816185 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:53:56.816196 | orchestrator |
2026-04-09 02:53:56.816207 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 02:53:56.816218 | orchestrator | Thursday 09 April 2026 02:53:53 +0000 (0:00:00.937) 0:00:34.187 ********
2026-04-09 02:53:56.816229 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 02:53:56.816240 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 02:53:56.816250 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 02:53:56.816312 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 02:53:56.816324 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 02:53:56.816335 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 02:53:56.816346 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 02:53:56.816356 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-09 02:53:56.816367 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-09 02:53:56.816377 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-09 02:53:56.816388 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-09 02:53:56.816399 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-09 02:53:56.816409 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-09 02:53:56.816420 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 02:53:56.816430 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-09 02:53:56.816441 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-09 02:53:56.816452 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-09 02:53:56.816470 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-09 02:53:56.816481 | orchestrator | 2026-04-09 02:53:56.816492 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 02:53:56.816503 | orchestrator | Thursday 09 April 2026 02:53:55 +0000 (0:00:01.758) 0:00:35.945 ******** 2026-04-09 02:53:56.816513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-09 02:53:56.816525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-09 02:53:56.816535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-09 02:53:56.816546 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:53:56.816557 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-09 02:53:56.816568 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-09 02:53:56.816579 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-09 02:53:56.816589 | orchestrator | skipping: [testbed-node-4] 
2026-04-09 02:53:56.816600 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-09 02:53:56.816611 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-09 02:53:56.816621 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-09 02:53:56.816632 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:53:56.816643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 02:53:56.816653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 02:53:56.816671 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 02:53:56.816682 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:53:56.816693 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-09 02:53:56.816703 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-09 02:53:56.816714 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-09 02:53:56.816724 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:53:56.816735 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-09 02:53:56.816746 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-09 02:53:56.816756 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-09 02:53:56.816767 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:53:56.816778 | orchestrator | 2026-04-09 02:53:56.816789 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 02:53:56.816808 | orchestrator | Thursday 09 April 2026 02:53:56 +0000 (0:00:01.062) 0:00:37.007 ******** 2026-04-09 02:54:16.045359 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:16.045471 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:16.045494 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:16.045511 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 02:54:16.045526 | orchestrator | 2026-04-09 02:54:16.045540 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 02:54:16.045556 | orchestrator | Thursday 09 April 2026 02:53:57 +0000 (0:00:01.164) 0:00:38.172 ******** 2026-04-09 02:54:16.045572 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.045586 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:16.045600 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:16.045615 | orchestrator | 2026-04-09 02:54:16.045629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 02:54:16.045645 | orchestrator | Thursday 09 April 2026 02:53:58 +0000 (0:00:00.395) 0:00:38.567 ******** 2026-04-09 02:54:16.045659 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.045674 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:16.045690 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:16.045706 | orchestrator | 2026-04-09 02:54:16.045718 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 02:54:16.045728 | orchestrator | Thursday 09 April 2026 02:53:58 +0000 (0:00:00.377) 0:00:38.945 ******** 2026-04-09 02:54:16.045737 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.045745 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:16.045754 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:16.045763 | orchestrator | 2026-04-09 02:54:16.045772 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 02:54:16.045781 | orchestrator | Thursday 09 April 2026 02:53:59 +0000 (0:00:00.572) 0:00:39.517 ******** 2026-04-09 02:54:16.045790 | orchestrator | 
ok: [testbed-node-3] 2026-04-09 02:54:16.045799 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:16.045808 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:16.045817 | orchestrator | 2026-04-09 02:54:16.045826 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 02:54:16.045834 | orchestrator | Thursday 09 April 2026 02:53:59 +0000 (0:00:00.492) 0:00:40.009 ******** 2026-04-09 02:54:16.045844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 02:54:16.045855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 02:54:16.045865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 02:54:16.045876 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.045886 | orchestrator | 2026-04-09 02:54:16.045896 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 02:54:16.045931 | orchestrator | Thursday 09 April 2026 02:54:00 +0000 (0:00:00.431) 0:00:40.441 ******** 2026-04-09 02:54:16.045942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 02:54:16.045952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 02:54:16.045963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 02:54:16.045974 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.045983 | orchestrator | 2026-04-09 02:54:16.045994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 02:54:16.046009 | orchestrator | Thursday 09 April 2026 02:54:00 +0000 (0:00:00.401) 0:00:40.843 ******** 2026-04-09 02:54:16.046112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 02:54:16.046129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 02:54:16.046144 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-09 02:54:16.046158 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.046173 | orchestrator | 2026-04-09 02:54:16.046188 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 02:54:16.046203 | orchestrator | Thursday 09 April 2026 02:54:01 +0000 (0:00:00.413) 0:00:41.256 ******** 2026-04-09 02:54:16.046212 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:16.046221 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:16.046230 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:16.046238 | orchestrator | 2026-04-09 02:54:16.046247 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 02:54:16.046255 | orchestrator | Thursday 09 April 2026 02:54:01 +0000 (0:00:00.374) 0:00:41.631 ******** 2026-04-09 02:54:16.046350 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 02:54:16.046360 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 02:54:16.046369 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-09 02:54:16.046378 | orchestrator | 2026-04-09 02:54:16.046386 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 02:54:16.046395 | orchestrator | Thursday 09 April 2026 02:54:02 +0000 (0:00:01.114) 0:00:42.745 ******** 2026-04-09 02:54:16.046404 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 02:54:16.046413 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 02:54:16.046422 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 02:54:16.046431 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 02:54:16.046439 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 02:54:16.046448 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 02:54:16.046457 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 02:54:16.046465 | orchestrator | 2026-04-09 02:54:16.046473 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 02:54:16.046482 | orchestrator | Thursday 09 April 2026 02:54:03 +0000 (0:00:00.898) 0:00:43.644 ******** 2026-04-09 02:54:16.046509 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 02:54:16.046519 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 02:54:16.046528 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 02:54:16.046536 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 02:54:16.046545 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 02:54:16.046553 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 02:54:16.046562 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 02:54:16.046570 | orchestrator | 2026-04-09 02:54:16.046589 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 02:54:16.046598 | orchestrator | Thursday 09 April 2026 02:54:05 +0000 (0:00:02.116) 0:00:45.760 ******** 2026-04-09 02:54:16.046607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:54:16.046617 | orchestrator | 2026-04-09 02:54:16.046625 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-04-09 02:54:16.046634 | orchestrator | Thursday 09 April 2026 02:54:06 +0000 (0:00:01.355) 0:00:47.116 ******** 2026-04-09 02:54:16.046642 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:54:16.046651 | orchestrator | 2026-04-09 02:54:16.046660 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 02:54:16.046668 | orchestrator | Thursday 09 April 2026 02:54:08 +0000 (0:00:01.290) 0:00:48.407 ******** 2026-04-09 02:54:16.046677 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.046686 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:16.046694 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:16.046703 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:54:16.046711 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:54:16.046720 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:54:16.046728 | orchestrator | 2026-04-09 02:54:16.046737 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 02:54:16.046745 | orchestrator | Thursday 09 April 2026 02:54:09 +0000 (0:00:01.342) 0:00:49.749 ******** 2026-04-09 02:54:16.046754 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:16.046762 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:16.046771 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:16.046779 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:16.046787 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:16.046796 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:16.046804 | orchestrator | 2026-04-09 02:54:16.046813 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 02:54:16.046821 | orchestrator | Thursday 09 April 2026 02:54:10 +0000 
(0:00:00.754) 0:00:50.504 ******** 2026-04-09 02:54:16.046830 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:16.046838 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:16.046847 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:16.046855 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:16.046864 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:16.046879 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:16.046888 | orchestrator | 2026-04-09 02:54:16.046896 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 02:54:16.046905 | orchestrator | Thursday 09 April 2026 02:54:11 +0000 (0:00:00.960) 0:00:51.464 ******** 2026-04-09 02:54:16.046913 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:16.046922 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:16.046930 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:16.046938 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:16.046947 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:16.046955 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:16.046964 | orchestrator | 2026-04-09 02:54:16.046973 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 02:54:16.046981 | orchestrator | Thursday 09 April 2026 02:54:11 +0000 (0:00:00.729) 0:00:52.194 ******** 2026-04-09 02:54:16.046989 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.046998 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:16.047007 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:16.047015 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:54:16.047024 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:54:16.047033 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:54:16.047041 | orchestrator | 2026-04-09 02:54:16.047050 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-04-09 02:54:16.047062 | orchestrator | Thursday 09 April 2026 02:54:13 +0000 (0:00:01.361) 0:00:53.555 ******** 2026-04-09 02:54:16.047071 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.047080 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:16.047088 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:16.047097 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:16.047105 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:16.047113 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:16.047122 | orchestrator | 2026-04-09 02:54:16.047131 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 02:54:16.047139 | orchestrator | Thursday 09 April 2026 02:54:14 +0000 (0:00:00.662) 0:00:54.218 ******** 2026-04-09 02:54:16.047148 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:16.047156 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:16.047167 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:16.047182 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:16.047197 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:16.047212 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:16.047226 | orchestrator | 2026-04-09 02:54:16.047241 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 02:54:16.047256 | orchestrator | Thursday 09 April 2026 02:54:14 +0000 (0:00:00.956) 0:00:55.175 ******** 2026-04-09 02:54:16.047299 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:16.047323 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:36.600700 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:36.600808 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:54:36.600818 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:54:36.600826 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:54:36.600833 | orchestrator | 2026-04-09 
02:54:36.600842 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 02:54:36.600852 | orchestrator | Thursday 09 April 2026 02:54:16 +0000 (0:00:01.061) 0:00:56.236 ******** 2026-04-09 02:54:36.600859 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:36.600866 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:36.600873 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:54:36.600881 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:36.600889 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:54:36.600896 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:54:36.600902 | orchestrator | 2026-04-09 02:54:36.600910 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 02:54:36.600917 | orchestrator | Thursday 09 April 2026 02:54:17 +0000 (0:00:01.350) 0:00:57.586 ******** 2026-04-09 02:54:36.600925 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:36.600935 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:36.600946 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:36.600956 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:36.600966 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:36.600972 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:36.600980 | orchestrator | 2026-04-09 02:54:36.600989 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 02:54:36.600997 | orchestrator | Thursday 09 April 2026 02:54:18 +0000 (0:00:00.627) 0:00:58.214 ******** 2026-04-09 02:54:36.601005 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:36.601015 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:36.601022 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:36.601029 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:54:36.601036 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:54:36.601043 | 
orchestrator | ok: [testbed-node-2] 2026-04-09 02:54:36.601050 | orchestrator | 2026-04-09 02:54:36.601056 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 02:54:36.601063 | orchestrator | Thursday 09 April 2026 02:54:18 +0000 (0:00:00.952) 0:00:59.167 ******** 2026-04-09 02:54:36.601070 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:36.601104 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:36.601111 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:36.601118 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:36.601125 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:36.601131 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:36.601138 | orchestrator | 2026-04-09 02:54:36.601145 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 02:54:36.601152 | orchestrator | Thursday 09 April 2026 02:54:19 +0000 (0:00:00.696) 0:00:59.863 ******** 2026-04-09 02:54:36.601163 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:36.601170 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:36.601178 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:36.601185 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:36.601191 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:36.601198 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:36.601205 | orchestrator | 2026-04-09 02:54:36.601213 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 02:54:36.601221 | orchestrator | Thursday 09 April 2026 02:54:20 +0000 (0:00:00.915) 0:01:00.779 ******** 2026-04-09 02:54:36.601229 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:36.601236 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:36.601243 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:36.601250 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
02:54:36.601372 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:36.601398 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:36.601407 | orchestrator | 2026-04-09 02:54:36.601416 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 02:54:36.601424 | orchestrator | Thursday 09 April 2026 02:54:21 +0000 (0:00:00.675) 0:01:01.454 ******** 2026-04-09 02:54:36.601432 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:36.601439 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:36.601446 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:36.601454 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:36.601461 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:36.601469 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:36.601477 | orchestrator | 2026-04-09 02:54:36.601484 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 02:54:36.601492 | orchestrator | Thursday 09 April 2026 02:54:22 +0000 (0:00:00.860) 0:01:02.314 ******** 2026-04-09 02:54:36.601500 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:36.601509 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:36.601515 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:36.601523 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:54:36.601530 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:54:36.601538 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:54:36.601545 | orchestrator | 2026-04-09 02:54:36.601553 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 02:54:36.601560 | orchestrator | Thursday 09 April 2026 02:54:22 +0000 (0:00:00.640) 0:01:02.955 ******** 2026-04-09 02:54:36.601567 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:36.601575 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
02:54:36.601582 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:54:36.601589 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:54:36.601596 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:54:36.601603 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:54:36.601610 | orchestrator | 2026-04-09 02:54:36.601617 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 02:54:36.601625 | orchestrator | Thursday 09 April 2026 02:54:23 +0000 (0:00:00.886) 0:01:03.841 ******** 2026-04-09 02:54:36.601632 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:36.601639 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:36.601646 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:36.601653 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:54:36.601660 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:54:36.601667 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:54:36.601685 | orchestrator | 2026-04-09 02:54:36.601692 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 02:54:36.601700 | orchestrator | Thursday 09 April 2026 02:54:24 +0000 (0:00:00.680) 0:01:04.522 ******** 2026-04-09 02:54:36.601707 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:54:36.601738 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:54:36.601746 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:54:36.601754 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:54:36.601761 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:54:36.601768 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:54:36.601776 | orchestrator | 2026-04-09 02:54:36.601784 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 02:54:36.601791 | orchestrator | Thursday 09 April 2026 02:54:25 +0000 (0:00:01.436) 0:01:05.958 ******** 2026-04-09 02:54:36.601799 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:54:36.601807 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 02:54:36.601815 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:54:36.601822 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:54:36.601839 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:54:36.601844 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:54:36.601849 | orchestrator | 2026-04-09 02:54:36.601854 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 02:54:36.601858 | orchestrator | Thursday 09 April 2026 02:54:27 +0000 (0:00:01.885) 0:01:07.843 ******** 2026-04-09 02:54:36.601863 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:54:36.601867 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:54:36.601872 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:54:36.601876 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:54:36.601881 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:54:36.601885 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:54:36.601890 | orchestrator | 2026-04-09 02:54:36.601895 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 02:54:36.601899 | orchestrator | Thursday 09 April 2026 02:54:30 +0000 (0:00:02.377) 0:01:10.221 ******** 2026-04-09 02:54:36.601905 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:54:36.601911 | orchestrator | 2026-04-09 02:54:36.601916 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 02:54:36.601920 | orchestrator | Thursday 09 April 2026 02:54:31 +0000 (0:00:01.360) 0:01:11.581 ******** 2026-04-09 02:54:36.601925 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:54:36.601929 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:54:36.601934 | orchestrator | 
skipping: [testbed-node-5]
2026-04-09 02:54:36.601938 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:54:36.601945 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:54:36.601952 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:54:36.601960 | orchestrator |
2026-04-09 02:54:36.601965 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-09 02:54:36.601970 | orchestrator | Thursday 09 April 2026 02:54:32 +0000 (0:00:00.694) 0:01:12.276 ********
2026-04-09 02:54:36.601974 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:54:36.601979 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:54:36.601983 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:54:36.601988 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:54:36.601992 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:54:36.601997 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:54:36.602001 | orchestrator |
2026-04-09 02:54:36.602006 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-09 02:54:36.602010 | orchestrator | Thursday 09 April 2026 02:54:32 +0000 (0:00:00.903) 0:01:13.180 ********
2026-04-09 02:54:36.602088 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 02:54:36.602099 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 02:54:36.602110 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 02:54:36.602115 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 02:54:36.602119 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 02:54:36.602124 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 02:54:36.602129 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 02:54:36.602134 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 02:54:36.602139 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 02:54:36.602143 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 02:54:36.602148 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 02:54:36.602152 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 02:54:36.602157 | orchestrator |
2026-04-09 02:54:36.602161 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-09 02:54:36.602166 | orchestrator | Thursday 09 April 2026 02:54:34 +0000 (0:00:01.432) 0:01:14.613 ********
2026-04-09 02:54:36.602170 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:54:36.602175 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:54:36.602179 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:54:36.602184 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:54:36.602188 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:54:36.602193 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:54:36.602197 | orchestrator |
2026-04-09 02:54:36.602202 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-09 02:54:36.602206 | orchestrator | Thursday 09 April 2026 02:54:35 +0000 (0:00:01.480) 0:01:16.093 ********
2026-04-09 02:54:36.602211 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:54:36.602215 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:54:36.602220 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:54:36.602224 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:54:36.602229 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:54:36.602234 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:54:36.602240 | orchestrator |
2026-04-09 02:54:36.602275 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-09 02:55:58.367454 | orchestrator | Thursday 09 April 2026 02:54:36 +0000 (0:00:00.699) 0:01:16.792 ********
2026-04-09 02:55:58.367593 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.367617 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.367632 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.367646 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.367661 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.367675 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.367688 | orchestrator |
2026-04-09 02:55:58.367704 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-09 02:55:58.367719 | orchestrator | Thursday 09 April 2026 02:54:37 +0000 (0:00:00.944) 0:01:17.737 ********
2026-04-09 02:55:58.367732 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.367744 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.367757 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.367772 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.367786 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.367801 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.367813 | orchestrator |
2026-04-09 02:55:58.367826 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-09 02:55:58.367839 | orchestrator | Thursday 09 April 2026 02:54:38 +0000 (0:00:00.659) 0:01:18.396 ********
2026-04-09 02:55:58.367883 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:55:58.367899 | orchestrator |
2026-04-09 02:55:58.367913 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-09 02:55:58.367925 | orchestrator | Thursday 09 April 2026 02:54:39 +0000 (0:00:01.409) 0:01:19.806 ********
2026-04-09 02:55:58.367938 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:55:58.367952 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:55:58.367965 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:55:58.367979 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:55:58.367992 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:55:58.368004 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:55:58.368016 | orchestrator |
2026-04-09 02:55:58.368029 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-09 02:55:58.368043 | orchestrator | Thursday 09 April 2026 02:55:44 +0000 (0:01:04.812) 0:02:24.618 ********
2026-04-09 02:55:58.368055 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 02:55:58.368068 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 02:55:58.368109 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 02:55:58.368123 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.368136 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 02:55:58.368149 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 02:55:58.368163 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 02:55:58.368176 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.368189 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 02:55:58.368202 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 02:55:58.368233 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 02:55:58.368247 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.368261 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 02:55:58.368274 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 02:55:58.368287 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 02:55:58.368300 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.368314 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 02:55:58.368326 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 02:55:58.368339 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 02:55:58.368351 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.368365 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 02:55:58.368377 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 02:55:58.368390 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 02:55:58.368403 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.368416 | orchestrator |
2026-04-09 02:55:58.368430 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-09 02:55:58.368443 | orchestrator | Thursday 09 April 2026 02:55:45 +0000 (0:00:00.825) 0:02:25.444 ********
2026-04-09 02:55:58.368456 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.368468 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.368481 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.368494 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.368507 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.368536 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.368550 | orchestrator |
2026-04-09 02:55:58.368563 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-09 02:55:58.368576 | orchestrator | Thursday 09 April 2026 02:55:46 +0000 (0:00:00.873) 0:02:26.317 ********
2026-04-09 02:55:58.368588 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.368602 | orchestrator |
2026-04-09 02:55:58.368615 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-09 02:55:58.368628 | orchestrator | Thursday 09 April 2026 02:55:46 +0000 (0:00:00.167) 0:02:26.484 ********
2026-04-09 02:55:58.368640 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.368675 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.368690 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.368703 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.368716 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.368728 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.368739 | orchestrator |
2026-04-09 02:55:58.368751 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-09 02:55:58.368762 | orchestrator | Thursday 09 April 2026 02:55:46 +0000 (0:00:00.702) 0:02:27.187 ********
2026-04-09 02:55:58.368774 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.368786 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.368797 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.368810 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.368822 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.368835 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.368846 | orchestrator |
2026-04-09 02:55:58.368858 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-09 02:55:58.368870 | orchestrator | Thursday 09 April 2026 02:55:47 +0000 (0:00:00.958) 0:02:28.146 ********
2026-04-09 02:55:58.368881 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.368894 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.368907 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.368920 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.368932 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.368944 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.368956 | orchestrator |
2026-04-09 02:55:58.368968 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 02:55:58.368982 | orchestrator | Thursday 09 April 2026 02:55:48 +0000 (0:00:00.701) 0:02:28.848 ********
2026-04-09 02:55:58.368996 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:55:58.369010 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:55:58.369023 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:55:58.369037 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:55:58.369049 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:55:58.369061 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:55:58.369072 | orchestrator |
2026-04-09 02:55:58.369203 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 02:55:58.369219 | orchestrator | Thursday 09 April 2026 02:55:52 +0000 (0:00:03.397) 0:02:32.245 ********
2026-04-09 02:55:58.369232 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:55:58.369245 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:55:58.369257 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:55:58.369270 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:55:58.369283 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:55:58.369295 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:55:58.369309 | orchestrator |
2026-04-09 02:55:58.369321 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 02:55:58.369332 | orchestrator | Thursday 09 April 2026 02:55:52 +0000 (0:00:00.679) 0:02:32.925 ********
2026-04-09 02:55:58.369346 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:55:58.369360 | orchestrator |
2026-04-09 02:55:58.369372 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 02:55:58.369401 | orchestrator | Thursday 09 April 2026 02:55:54 +0000 (0:00:01.422) 0:02:34.347 ********
2026-04-09 02:55:58.369412 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.369425 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.369437 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.369459 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.369473 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.369487 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.369499 | orchestrator |
2026-04-09 02:55:58.369512 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 02:55:58.369524 | orchestrator | Thursday 09 April 2026 02:55:55 +0000 (0:00:01.004) 0:02:35.352 ********
2026-04-09 02:55:58.369536 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.369549 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.369561 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.369573 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.369585 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.369598 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.369611 | orchestrator |
2026-04-09 02:55:58.369623 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 02:55:58.369636 | orchestrator | Thursday 09 April 2026 02:55:55 +0000 (0:00:00.677) 0:02:36.029 ********
2026-04-09 02:55:58.369649 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.369662 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.369676 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.369688 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.369701 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.369713 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.369725 | orchestrator |
2026-04-09 02:55:58.369738 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 02:55:58.369751 | orchestrator | Thursday 09 April 2026 02:55:56 +0000 (0:00:00.978) 0:02:37.008 ********
2026-04-09 02:55:58.369763 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.369776 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.369787 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.369799 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.369812 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.369824 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.369837 | orchestrator |
2026-04-09 02:55:58.369850 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 02:55:58.369863 | orchestrator | Thursday 09 April 2026 02:55:57 +0000 (0:00:00.622) 0:02:37.630 ********
2026-04-09 02:55:58.369877 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:55:58.369889 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:55:58.369902 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:55:58.369914 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:55:58.369928 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:55:58.369942 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:55:58.369956 | orchestrator |
2026-04-09 02:55:58.369970 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 02:55:58.370004 | orchestrator | Thursday 09 April 2026 02:55:58 +0000 (0:00:00.930) 0:02:38.561 ********
2026-04-09 02:56:10.409975 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:56:10.410178 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:56:10.410204 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:56:10.410218 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:10.410233 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:10.410249 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:10.410265 | orchestrator |
2026-04-09 02:56:10.410284 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 02:56:10.410301 | orchestrator | Thursday 09 April 2026 02:55:59 +0000 (0:00:00.697) 0:02:39.258 ********
2026-04-09 02:56:10.410346 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:56:10.410357 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:56:10.410366 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:56:10.410375 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:10.410384 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:10.410393 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:10.410401 | orchestrator |
2026-04-09 02:56:10.410411 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 02:56:10.410420 | orchestrator | Thursday 09 April 2026 02:55:59 +0000 (0:00:00.918) 0:02:40.176 ********
2026-04-09 02:56:10.410429 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:56:10.410437 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:56:10.410446 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:56:10.410457 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:10.410471 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:10.410496 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:10.410510 | orchestrator |
2026-04-09 02:56:10.410524 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 02:56:10.410538 | orchestrator | Thursday 09 April 2026 02:56:00 +0000 (0:00:00.694) 0:02:40.871 ********
2026-04-09 02:56:10.410552 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:56:10.410568 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:56:10.410582 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:56:10.410596 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:56:10.410611 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:56:10.410626 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:56:10.410640 | orchestrator |
2026-04-09 02:56:10.410656 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 02:56:10.410671 | orchestrator | Thursday 09 April 2026 02:56:02 +0000 (0:00:01.491) 0:02:42.362 ********
2026-04-09 02:56:10.410687 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:56:10.410705 | orchestrator |
2026-04-09 02:56:10.410715 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 02:56:10.410724 | orchestrator | Thursday 09 April 2026 02:56:03 +0000 (0:00:01.581) 0:02:43.944 ********
2026-04-09 02:56:10.410733 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-09 02:56:10.410742 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-09 02:56:10.410750 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-09 02:56:10.410759 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-09 02:56:10.410768 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-09 02:56:10.410776 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-09 02:56:10.410785 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-09 02:56:10.410807 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-09 02:56:10.410816 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-09 02:56:10.410824 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-09 02:56:10.410833 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-09 02:56:10.410842 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-09 02:56:10.410850 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-09 02:56:10.410859 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-09 02:56:10.410868 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-09 02:56:10.410876 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-09 02:56:10.410885 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-09 02:56:10.410894 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-09 02:56:10.410903 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-09 02:56:10.410921 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-09 02:56:10.410930 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-09 02:56:10.410938 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-09 02:56:10.410947 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-09 02:56:10.410956 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-09 02:56:10.410965 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-09 02:56:10.410973 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-09 02:56:10.410982 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-09 02:56:10.410991 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-09 02:56:10.410999 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-09 02:56:10.411008 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-09 02:56:10.411016 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-09 02:56:10.411025 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-09 02:56:10.411073 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-09 02:56:10.411085 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-09 02:56:10.411094 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-09 02:56:10.411121 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-09 02:56:10.411130 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-09 02:56:10.411139 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-09 02:56:10.411148 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-09 02:56:10.411156 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-09 02:56:10.411165 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-09 02:56:10.411174 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-09 02:56:10.411183 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-09 02:56:10.411191 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-09 02:56:10.411200 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-09 02:56:10.411208 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 02:56:10.411217 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 02:56:10.411226 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-09 02:56:10.411234 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-09 02:56:10.411243 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 02:56:10.411251 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-09 02:56:10.411260 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 02:56:10.411269 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 02:56:10.411277 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 02:56:10.411291 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 02:56:10.411311 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 02:56:10.411329 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 02:56:10.411343 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 02:56:10.411357 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 02:56:10.411371 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 02:56:10.411385 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 02:56:10.411409 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 02:56:10.411423 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 02:56:10.411436 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 02:56:10.411450 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 02:56:10.411465 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 02:56:10.411487 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 02:56:10.411503 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 02:56:10.411518 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 02:56:10.411532 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 02:56:10.411544 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 02:56:10.411553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 02:56:10.411561 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 02:56:10.411570 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 02:56:10.411579 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 02:56:10.411587 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 02:56:10.411596 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 02:56:10.411604 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 02:56:10.411613 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 02:56:10.411622 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 02:56:10.411630 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-09 02:56:10.411639 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-09 02:56:10.411648 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 02:56:10.411656 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 02:56:10.411665 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-09 02:56:10.411673 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 02:56:10.411682 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-09 02:56:10.411691 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-09 02:56:10.411699 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-09 02:56:10.411708 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 02:56:10.411717 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-09 02:56:10.411735 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-09 02:56:26.502435 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-09 02:56:26.502565 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-09 02:56:26.502582 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-09 02:56:26.502593 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-09 02:56:26.502604 | orchestrator |
2026-04-09 02:56:26.502615 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 02:56:26.502626 | orchestrator | Thursday 09 April 2026 02:56:10 +0000 (0:00:06.607) 0:02:50.552 ********
2026-04-09 02:56:26.502636 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.502647 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.502657 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.502669 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 02:56:26.502701 | orchestrator |
2026-04-09 02:56:26.502712 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-09 02:56:26.502722 | orchestrator | Thursday 09 April 2026 02:56:11 +0000 (0:00:01.138) 0:02:51.690 ********
2026-04-09 02:56:26.502732 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 02:56:26.502742 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 02:56:26.502753 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 02:56:26.502762 | orchestrator |
2026-04-09 02:56:26.502772 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-09 02:56:26.502782 | orchestrator | Thursday 09 April 2026 02:56:12 +0000 (0:00:00.785) 0:02:52.476 ********
2026-04-09 02:56:26.502791 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 02:56:26.502801 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 02:56:26.502811 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 02:56:26.502820 | orchestrator |
2026-04-09 02:56:26.502830 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 02:56:26.502840 | orchestrator | Thursday 09 April 2026 02:56:13 +0000 (0:00:01.264) 0:02:53.741 ********
2026-04-09 02:56:26.502849 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:56:26.502859 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:56:26.502869 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:56:26.502878 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.502888 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.502897 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.502907 | orchestrator |
2026-04-09 02:56:26.502917 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 02:56:26.502941 | orchestrator | Thursday 09 April 2026 02:56:14 +0000 (0:00:00.869) 0:02:54.610 ********
2026-04-09 02:56:26.502953 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:56:26.502964 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:56:26.503003 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:56:26.503015 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.503024 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.503033 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.503043 | orchestrator |
2026-04-09 02:56:26.503052 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 02:56:26.503061 | orchestrator | Thursday 09 April 2026 02:56:15 +0000 (0:00:00.695) 0:02:55.306 ********
2026-04-09 02:56:26.503070 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:56:26.503079 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:56:26.503088 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:56:26.503098 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.503107 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.503116 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.503125 | orchestrator |
2026-04-09 02:56:26.503134 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 02:56:26.503143 | orchestrator | Thursday 09 April 2026 02:56:16 +0000 (0:00:00.922) 0:02:56.228 ********
2026-04-09 02:56:26.503152 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:56:26.503161 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:56:26.503170 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:56:26.503179 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.503189 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.503204 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.503214 | orchestrator |
2026-04-09 02:56:26.503223 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 02:56:26.503232 | orchestrator | Thursday 09 April 2026 02:56:16 +0000 (0:00:00.707) 0:02:56.936 ********
2026-04-09 02:56:26.503242 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:56:26.503250 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:56:26.503260 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:56:26.503269 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.503278 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.503287 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.503294 | orchestrator |
2026-04-09 02:56:26.503302 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 02:56:26.503310 | orchestrator | Thursday 09 April 2026 02:56:17 +0000 (0:00:00.930) 0:02:57.867 ********
2026-04-09 02:56:26.503318 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:56:26.503326 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:56:26.503337 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:56:26.503351 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.503383 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.503398 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.503412 | orchestrator |
2026-04-09 02:56:26.503427 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 02:56:26.503441 | orchestrator | Thursday 09 April 2026 02:56:18 +0000 (0:00:00.649) 0:02:58.516 ********
2026-04-09 02:56:26.503455 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:56:26.503468 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:56:26.503482 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:56:26.503494 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.503508 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.503522 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.503534 | orchestrator |
2026-04-09 02:56:26.503547 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 02:56:26.503560 | orchestrator | Thursday 09 April 2026 02:56:19 +0000 (0:00:00.984) 0:02:59.500 ********
2026-04-09 02:56:26.503572 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:56:26.503584 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:56:26.503630 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:56:26.503644 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.503655 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.503667 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.503678 | orchestrator |
2026-04-09 02:56:26.503690 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 02:56:26.503702 | orchestrator | Thursday 09 April 2026 02:56:19 +0000 (0:00:00.663) 0:03:00.164 ********
2026-04-09 02:56:26.503713 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.503724 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.503736 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.503747 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:56:26.503759 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:56:26.503771 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:56:26.503782 | orchestrator |
2026-04-09 02:56:26.503794 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 02:56:26.503806 | orchestrator | Thursday 09 April 2026 02:56:22 +0000 (0:00:03.014) 0:03:03.178 ********
2026-04-09 02:56:26.503818 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:56:26.503829 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:56:26.503841 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:56:26.503853 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.503865 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.503876 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.503888 | orchestrator |
2026-04-09 02:56:26.503900 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 02:56:26.503921 | orchestrator | Thursday 09 April 2026 02:56:23 +0000 (0:00:00.652) 0:03:03.831 ********
2026-04-09 02:56:26.503934 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:56:26.503946 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:56:26.503958 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:56:26.503970 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:56:26.504005 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:56:26.504018 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:56:26.504031 | orchestrator |
2026-04-09 02:56:26.504044 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 02:56:26.504057 | orchestrator | Thursday 09 April 2026 02:56:24 +0000
(0:00:00.996) 0:03:04.828 ******** 2026-04-09 02:56:26.504070 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:26.504084 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:56:26.504106 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:56:26.504119 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:26.504132 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:26.504145 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:26.504159 | orchestrator | 2026-04-09 02:56:26.504172 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-09 02:56:26.504186 | orchestrator | Thursday 09 April 2026 02:56:25 +0000 (0:00:00.661) 0:03:05.489 ******** 2026-04-09 02:56:26.504199 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 02:56:26.504213 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 02:56:26.504226 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 02:56:26.504239 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:26.504252 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:26.504264 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:26.504303 | orchestrator | 2026-04-09 02:56:26.504316 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 02:56:26.504330 | orchestrator | Thursday 09 April 2026 02:56:26 +0000 (0:00:00.957) 0:03:06.447 ******** 2026-04-09 02:56:26.504345 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-09 02:56:26.504362 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-09 02:56:26.504395 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-09 02:56:44.898082 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-09 02:56:44.898182 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:44.898192 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-09 02:56:44.898220 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-09 02:56:44.898226 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 02:56:44.898232 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:56:44.898238 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.898244 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:44.898250 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:44.898256 | orchestrator | 2026-04-09 02:56:44.898263 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 02:56:44.898270 | orchestrator | Thursday 09 April 2026 02:56:27 +0000 (0:00:00.782) 0:03:07.229 ******** 2026-04-09 02:56:44.898276 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:44.898281 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:56:44.898287 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:56:44.898293 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.898299 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:44.898304 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:44.898310 | orchestrator | 2026-04-09 02:56:44.898316 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 02:56:44.898322 | orchestrator | Thursday 09 April 2026 02:56:27 +0000 (0:00:00.944) 0:03:08.173 ******** 2026-04-09 02:56:44.898328 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:44.898334 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:56:44.898340 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:56:44.898345 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.898351 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:44.898357 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:44.898362 | orchestrator | 2026-04-09 02:56:44.898370 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 02:56:44.898389 | orchestrator | 
Thursday 09 April 2026 02:56:28 +0000 (0:00:00.893) 0:03:09.067 ******** 2026-04-09 02:56:44.898395 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:44.898401 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:56:44.898407 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:56:44.898414 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.898420 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:44.898426 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:44.898432 | orchestrator | 2026-04-09 02:56:44.898438 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 02:56:44.898444 | orchestrator | Thursday 09 April 2026 02:56:29 +0000 (0:00:00.730) 0:03:09.797 ******** 2026-04-09 02:56:44.898451 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:44.898457 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:56:44.898463 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:56:44.898469 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.898473 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:44.898476 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:44.898480 | orchestrator | 2026-04-09 02:56:44.898484 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 02:56:44.898488 | orchestrator | Thursday 09 April 2026 02:56:30 +0000 (0:00:00.940) 0:03:10.738 ******** 2026-04-09 02:56:44.898491 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:44.898495 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:56:44.898499 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:56:44.898502 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.898506 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:44.898510 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:44.898518 | orchestrator | 2026-04-09 02:56:44.898522 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 02:56:44.898526 | orchestrator | Thursday 09 April 2026 02:56:31 +0000 (0:00:00.696) 0:03:11.434 ******** 2026-04-09 02:56:44.898529 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:56:44.898534 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:56:44.898538 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:56:44.898541 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.898545 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:44.898549 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:44.898552 | orchestrator | 2026-04-09 02:56:44.898556 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 02:56:44.898560 | orchestrator | Thursday 09 April 2026 02:56:32 +0000 (0:00:00.929) 0:03:12.363 ******** 2026-04-09 02:56:44.898565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 02:56:44.898570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 02:56:44.898574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 02:56:44.898579 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:44.898583 | orchestrator | 2026-04-09 02:56:44.898588 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 02:56:44.898603 | orchestrator | Thursday 09 April 2026 02:56:32 +0000 (0:00:00.465) 0:03:12.829 ******** 2026-04-09 02:56:44.898608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 02:56:44.898612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 02:56:44.898617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 02:56:44.898621 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:44.898625 | orchestrator | 2026-04-09 02:56:44.898630 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 02:56:44.898634 | orchestrator | Thursday 09 April 2026 02:56:33 +0000 (0:00:00.467) 0:03:13.297 ******** 2026-04-09 02:56:44.898639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 02:56:44.898643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 02:56:44.898648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 02:56:44.898653 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:56:44.898659 | orchestrator | 2026-04-09 02:56:44.898666 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 02:56:44.898672 | orchestrator | Thursday 09 April 2026 02:56:33 +0000 (0:00:00.470) 0:03:13.767 ******** 2026-04-09 02:56:44.898678 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:56:44.898684 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:56:44.898690 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:56:44.898696 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.898702 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:44.898708 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:44.898714 | orchestrator | 2026-04-09 02:56:44.898720 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 02:56:44.898727 | orchestrator | Thursday 09 April 2026 02:56:34 +0000 (0:00:00.706) 0:03:14.473 ******** 2026-04-09 02:56:44.898733 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 02:56:44.898740 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 02:56:44.898745 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-09 02:56:44.898752 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-09 02:56:44.898759 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.898765 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2026-04-09 02:56:44.898772 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:56:44.898779 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-09 02:56:44.898784 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:56:44.898789 | orchestrator | 2026-04-09 02:56:44.898793 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-09 02:56:44.898803 | orchestrator | Thursday 09 April 2026 02:56:36 +0000 (0:00:02.081) 0:03:16.554 ******** 2026-04-09 02:56:44.898807 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:56:44.898812 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:56:44.898816 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:56:44.898820 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:56:44.898825 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:56:44.898830 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:56:44.898836 | orchestrator | 2026-04-09 02:56:44.898842 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 02:56:44.898849 | orchestrator | Thursday 09 April 2026 02:56:39 +0000 (0:00:02.810) 0:03:19.365 ******** 2026-04-09 02:56:44.898855 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:56:44.898866 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:56:44.898872 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:56:44.898878 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:56:44.898884 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:56:44.898890 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:56:44.898897 | orchestrator | 2026-04-09 02:56:44.898903 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-09 02:56:44.898947 | orchestrator | Thursday 09 April 2026 02:56:40 +0000 (0:00:01.031) 0:03:20.397 ******** 2026-04-09 02:56:44.898955 | orchestrator | 
skipping: [testbed-node-3] 2026-04-09 02:56:44.898961 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:56:44.898966 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:56:44.898974 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:56:44.898980 | orchestrator | 2026-04-09 02:56:44.898985 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-09 02:56:44.898989 | orchestrator | Thursday 09 April 2026 02:56:41 +0000 (0:00:01.169) 0:03:21.567 ******** 2026-04-09 02:56:44.898992 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:56:44.898996 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:56:44.899000 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:56:44.899003 | orchestrator | 2026-04-09 02:56:44.899007 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-09 02:56:44.899011 | orchestrator | Thursday 09 April 2026 02:56:41 +0000 (0:00:00.377) 0:03:21.945 ******** 2026-04-09 02:56:44.899015 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:56:44.899018 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:56:44.899022 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:56:44.899026 | orchestrator | 2026-04-09 02:56:44.899029 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-09 02:56:44.899033 | orchestrator | Thursday 09 April 2026 02:56:43 +0000 (0:00:01.490) 0:03:23.436 ******** 2026-04-09 02:56:44.899037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 02:56:44.899041 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 02:56:44.899044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 02:56:44.899048 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:56:44.899052 | orchestrator | 
2026-04-09 02:56:44.899055 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-09 02:56:44.899059 | orchestrator | Thursday 09 April 2026 02:56:43 +0000 (0:00:00.752) 0:03:24.188 ******** 2026-04-09 02:56:44.899063 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:56:44.899067 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:56:44.899070 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:56:44.899074 | orchestrator | 2026-04-09 02:56:44.899078 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-09 02:56:44.899082 | orchestrator | Thursday 09 April 2026 02:56:44 +0000 (0:00:00.365) 0:03:24.554 ******** 2026-04-09 02:56:44.899091 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:57:03.228401 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:57:03.228517 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:57:03.228558 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 02:57:03.228571 | orchestrator | 2026-04-09 02:57:03.228583 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-09 02:57:03.228595 | orchestrator | Thursday 09 April 2026 02:56:45 +0000 (0:00:01.334) 0:03:25.889 ******** 2026-04-09 02:57:03.228606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 02:57:03.228617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 02:57:03.228628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 02:57:03.228639 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.228649 | orchestrator | 2026-04-09 02:57:03.228660 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-09 02:57:03.228671 | orchestrator | Thursday 09 April 2026 02:56:46 +0000 
(0:00:00.432) 0:03:26.321 ******** 2026-04-09 02:57:03.228682 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.228693 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:57:03.228703 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:57:03.228714 | orchestrator | 2026-04-09 02:57:03.228724 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-09 02:57:03.228735 | orchestrator | Thursday 09 April 2026 02:56:46 +0000 (0:00:00.438) 0:03:26.760 ******** 2026-04-09 02:57:03.228746 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.228756 | orchestrator | 2026-04-09 02:57:03.228767 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-09 02:57:03.228778 | orchestrator | Thursday 09 April 2026 02:56:46 +0000 (0:00:00.264) 0:03:27.025 ******** 2026-04-09 02:57:03.228788 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.228799 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:57:03.228810 | orchestrator | skipping: [testbed-node-5] 2026-04-09 02:57:03.228820 | orchestrator | 2026-04-09 02:57:03.228831 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-09 02:57:03.228841 | orchestrator | Thursday 09 April 2026 02:56:47 +0000 (0:00:00.584) 0:03:27.609 ******** 2026-04-09 02:57:03.228903 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.228915 | orchestrator | 2026-04-09 02:57:03.228925 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-09 02:57:03.228939 | orchestrator | Thursday 09 April 2026 02:56:47 +0000 (0:00:00.251) 0:03:27.861 ******** 2026-04-09 02:57:03.228951 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.228964 | orchestrator | 2026-04-09 02:57:03.228977 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-09 
02:57:03.228989 | orchestrator | Thursday 09 April 2026 02:56:47 +0000 (0:00:00.257) 0:03:28.119 ******** 2026-04-09 02:57:03.229002 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.229015 | orchestrator | 2026-04-09 02:57:03.229028 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-09 02:57:03.229057 | orchestrator | Thursday 09 April 2026 02:56:48 +0000 (0:00:00.150) 0:03:28.270 ******** 2026-04-09 02:57:03.229098 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.229111 | orchestrator | 2026-04-09 02:57:03.229124 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-09 02:57:03.229137 | orchestrator | Thursday 09 April 2026 02:56:48 +0000 (0:00:00.264) 0:03:28.535 ******** 2026-04-09 02:57:03.229150 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.229164 | orchestrator | 2026-04-09 02:57:03.229177 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-09 02:57:03.229189 | orchestrator | Thursday 09 April 2026 02:56:48 +0000 (0:00:00.253) 0:03:28.788 ******** 2026-04-09 02:57:03.229202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 02:57:03.229215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 02:57:03.229228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 02:57:03.229249 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.229262 | orchestrator | 2026-04-09 02:57:03.229276 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-09 02:57:03.229289 | orchestrator | Thursday 09 April 2026 02:56:49 +0000 (0:00:00.465) 0:03:29.254 ******** 2026-04-09 02:57:03.229301 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.229311 | orchestrator | skipping: [testbed-node-4] 2026-04-09 02:57:03.229322 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 02:57:03.229333 | orchestrator | 2026-04-09 02:57:03.229344 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-09 02:57:03.229355 | orchestrator | Thursday 09 April 2026 02:56:49 +0000 (0:00:00.375) 0:03:29.629 ******** 2026-04-09 02:57:03.229365 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.229376 | orchestrator | 2026-04-09 02:57:03.229387 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-09 02:57:03.229398 | orchestrator | Thursday 09 April 2026 02:56:49 +0000 (0:00:00.252) 0:03:29.882 ******** 2026-04-09 02:57:03.229408 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.229419 | orchestrator | 2026-04-09 02:57:03.229429 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-09 02:57:03.229440 | orchestrator | Thursday 09 April 2026 02:56:49 +0000 (0:00:00.256) 0:03:30.138 ******** 2026-04-09 02:57:03.229451 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:57:03.229462 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:57:03.229473 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:57:03.229484 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 02:57:03.229495 | orchestrator | 2026-04-09 02:57:03.229505 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-09 02:57:03.229516 | orchestrator | Thursday 09 April 2026 02:56:51 +0000 (0:00:01.169) 0:03:31.307 ******** 2026-04-09 02:57:03.229527 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:57:03.229539 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:57:03.229550 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:57:03.229561 | orchestrator | 2026-04-09 02:57:03.229590 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy mds restart script] *********************** 2026-04-09 02:57:03.229601 | orchestrator | Thursday 09 April 2026 02:56:51 +0000 (0:00:00.360) 0:03:31.668 ******** 2026-04-09 02:57:03.229706 | orchestrator | changed: [testbed-node-3] 2026-04-09 02:57:03.229731 | orchestrator | changed: [testbed-node-4] 2026-04-09 02:57:03.229745 | orchestrator | changed: [testbed-node-5] 2026-04-09 02:57:03.229756 | orchestrator | 2026-04-09 02:57:03.229767 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-09 02:57:03.229778 | orchestrator | Thursday 09 April 2026 02:56:53 +0000 (0:00:01.610) 0:03:33.279 ******** 2026-04-09 02:57:03.229789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 02:57:03.229799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 02:57:03.229810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 02:57:03.229821 | orchestrator | skipping: [testbed-node-3] 2026-04-09 02:57:03.229831 | orchestrator | 2026-04-09 02:57:03.229842 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-09 02:57:03.229880 | orchestrator | Thursday 09 April 2026 02:56:53 +0000 (0:00:00.701) 0:03:33.980 ******** 2026-04-09 02:57:03.229894 | orchestrator | ok: [testbed-node-3] 2026-04-09 02:57:03.229913 | orchestrator | ok: [testbed-node-4] 2026-04-09 02:57:03.229930 | orchestrator | ok: [testbed-node-5] 2026-04-09 02:57:03.229947 | orchestrator | 2026-04-09 02:57:03.229964 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-09 02:57:03.229981 | orchestrator | Thursday 09 April 2026 02:56:54 +0000 (0:00:00.346) 0:03:34.326 ******** 2026-04-09 02:57:03.229999 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:57:03.230096 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:57:03.230112 | orchestrator | 
skipping: [testbed-node-2]
2026-04-09 02:57:03.230134 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 02:57:03.230145 | orchestrator |
2026-04-09 02:57:03.230156 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-09 02:57:03.230166 | orchestrator | Thursday 09 April 2026 02:56:55 +0000 (0:00:01.168) 0:03:35.495 ********
2026-04-09 02:57:03.230177 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:57:03.230225 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:57:03.230237 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:57:03.230248 | orchestrator |
2026-04-09 02:57:03.230259 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-09 02:57:03.230270 | orchestrator | Thursday 09 April 2026 02:56:55 +0000 (0:00:00.383) 0:03:35.878 ********
2026-04-09 02:57:03.230280 | orchestrator | changed: [testbed-node-3]
2026-04-09 02:57:03.230291 | orchestrator | changed: [testbed-node-4]
2026-04-09 02:57:03.230302 | orchestrator | changed: [testbed-node-5]
2026-04-09 02:57:03.230313 | orchestrator |
2026-04-09 02:57:03.230324 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-09 02:57:03.230334 | orchestrator | Thursday 09 April 2026 02:56:56 +0000 (0:00:01.236) 0:03:37.115 ********
2026-04-09 02:57:03.230345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 02:57:03.230364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 02:57:03.230375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 02:57:03.230386 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:57:03.230396 | orchestrator |
2026-04-09 02:57:03.230407 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-09 02:57:03.230418 | orchestrator | Thursday 09 April 2026 02:56:57 +0000 (0:00:00.971) 0:03:38.086 ********
2026-04-09 02:57:03.230429 | orchestrator | ok: [testbed-node-3]
2026-04-09 02:57:03.230439 | orchestrator | ok: [testbed-node-4]
2026-04-09 02:57:03.230450 | orchestrator | ok: [testbed-node-5]
2026-04-09 02:57:03.230461 | orchestrator |
2026-04-09 02:57:03.230472 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-09 02:57:03.230483 | orchestrator | Thursday 09 April 2026 02:56:58 +0000 (0:00:00.655) 0:03:38.741 ********
2026-04-09 02:57:03.230493 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:57:03.230504 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:57:03.230515 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:57:03.230525 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:03.230536 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:03.230546 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:03.230557 | orchestrator |
2026-04-09 02:57:03.230568 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-09 02:57:03.230579 | orchestrator | Thursday 09 April 2026 02:56:59 +0000 (0:00:00.748) 0:03:39.489 ********
2026-04-09 02:57:03.230589 | orchestrator | skipping: [testbed-node-3]
2026-04-09 02:57:03.230600 | orchestrator | skipping: [testbed-node-4]
2026-04-09 02:57:03.230611 | orchestrator | skipping: [testbed-node-5]
2026-04-09 02:57:03.230621 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:57:03.230632 | orchestrator |
2026-04-09 02:57:03.230643 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-09 02:57:03.230653 | orchestrator | Thursday 09 April 2026 02:57:00 +0000 (0:00:01.143) 0:03:40.633 ********
2026-04-09 02:57:03.230664 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:03.230675 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:03.230685 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:03.230696 | orchestrator |
2026-04-09 02:57:03.230707 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-09 02:57:03.230718 | orchestrator | Thursday 09 April 2026 02:57:00 +0000 (0:00:00.406) 0:03:41.039 ********
2026-04-09 02:57:03.230728 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:57:03.230748 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:57:03.230759 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:57:03.230770 | orchestrator |
2026-04-09 02:57:03.230780 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-09 02:57:03.230791 | orchestrator | Thursday 09 April 2026 02:57:01 +0000 (0:00:01.162) 0:03:42.201 ********
2026-04-09 02:57:03.230802 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 02:57:03.230826 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 02:57:21.097158 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 02:57:21.097300 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.097326 | orchestrator |
2026-04-09 02:57:21.097346 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-09 02:57:21.097367 | orchestrator | Thursday 09 April 2026 02:57:03 +0000 (0:00:01.214) 0:03:43.415 ********
2026-04-09 02:57:21.097387 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.097406 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.097424 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.097443 | orchestrator |
2026-04-09 02:57:21.097462 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-09 02:57:21.097480 | orchestrator |
2026-04-09 02:57:21.097523 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 02:57:21.097544 | orchestrator | Thursday 09 April 2026 02:57:03 +0000 (0:00:00.677) 0:03:44.093 ********
2026-04-09 02:57:21.097580 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:57:21.097601 | orchestrator |
2026-04-09 02:57:21.097619 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 02:57:21.097638 | orchestrator | Thursday 09 April 2026 02:57:04 +0000 (0:00:00.828) 0:03:44.922 ********
2026-04-09 02:57:21.097656 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:57:21.097676 | orchestrator |
2026-04-09 02:57:21.097696 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 02:57:21.097716 | orchestrator | Thursday 09 April 2026 02:57:05 +0000 (0:00:00.635) 0:03:45.558 ********
2026-04-09 02:57:21.097736 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.097757 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.097778 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.097827 | orchestrator |
2026-04-09 02:57:21.097847 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 02:57:21.097867 | orchestrator | Thursday 09 April 2026 02:57:06 +0000 (0:00:00.729) 0:03:46.287 ********
2026-04-09 02:57:21.097887 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.097907 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.097927 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.097947 | orchestrator |
2026-04-09 02:57:21.097967 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 02:57:21.097986 | orchestrator | Thursday 09 April 2026 02:57:06 +0000 (0:00:00.627) 0:03:46.915 ********
2026-04-09 02:57:21.098003 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.098107 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.098126 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.098143 | orchestrator |
2026-04-09 02:57:21.098161 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 02:57:21.098224 | orchestrator | Thursday 09 April 2026 02:57:07 +0000 (0:00:00.386) 0:03:47.302 ********
2026-04-09 02:57:21.098244 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.098262 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.098302 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.098323 | orchestrator |
2026-04-09 02:57:21.098341 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 02:57:21.098360 | orchestrator | Thursday 09 April 2026 02:57:07 +0000 (0:00:00.384) 0:03:47.686 ********
2026-04-09 02:57:21.098408 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.098427 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.098445 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.098463 | orchestrator |
2026-04-09 02:57:21.098481 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 02:57:21.098500 | orchestrator | Thursday 09 April 2026 02:57:08 +0000 (0:00:00.716) 0:03:48.402 ********
2026-04-09 02:57:21.098517 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.098535 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.098552 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.098570 | orchestrator |
2026-04-09 02:57:21.098587 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 02:57:21.098604 | orchestrator | Thursday 09 April 2026 02:57:08 +0000 (0:00:00.625) 0:03:49.027 ********
2026-04-09 02:57:21.098620 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.098637 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.098654 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.098672 | orchestrator |
2026-04-09 02:57:21.098690 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 02:57:21.098708 | orchestrator | Thursday 09 April 2026 02:57:09 +0000 (0:00:00.346) 0:03:49.373 ********
2026-04-09 02:57:21.098726 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.098744 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.098761 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.098778 | orchestrator |
2026-04-09 02:57:21.098928 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 02:57:21.098952 | orchestrator | Thursday 09 April 2026 02:57:09 +0000 (0:00:00.706) 0:03:50.080 ********
2026-04-09 02:57:21.098970 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.098989 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.099007 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.099025 | orchestrator |
2026-04-09 02:57:21.099043 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 02:57:21.099062 | orchestrator | Thursday 09 April 2026 02:57:10 +0000 (0:00:00.710) 0:03:50.791 ********
2026-04-09 02:57:21.099080 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.099099 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.099117 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.099135 | orchestrator |
2026-04-09 02:57:21.099153 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 02:57:21.099172 | orchestrator | Thursday 09 April 2026 02:57:11 +0000 (0:00:00.698) 0:03:51.489 ********
2026-04-09 02:57:21.099190 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.099209 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.099228 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.099246 | orchestrator |
2026-04-09 02:57:21.099265 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 02:57:21.099314 | orchestrator | Thursday 09 April 2026 02:57:11 +0000 (0:00:00.387) 0:03:51.876 ********
2026-04-09 02:57:21.099332 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.099351 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.099369 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.099387 | orchestrator |
2026-04-09 02:57:21.099406 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 02:57:21.099424 | orchestrator | Thursday 09 April 2026 02:57:12 +0000 (0:00:00.344) 0:03:52.220 ********
2026-04-09 02:57:21.099441 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.099457 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.099473 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.099490 | orchestrator |
2026-04-09 02:57:21.099506 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 02:57:21.099522 | orchestrator | Thursday 09 April 2026 02:57:12 +0000 (0:00:00.312) 0:03:52.533 ********
2026-04-09 02:57:21.099539 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.099569 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.099586 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.099602 | orchestrator |
2026-04-09 02:57:21.099618 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 02:57:21.099634 | orchestrator | Thursday 09 April 2026 02:57:12 +0000 (0:00:00.656) 0:03:53.189 ********
2026-04-09 02:57:21.099650 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.099666 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.099683 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.099699 | orchestrator |
2026-04-09 02:57:21.099715 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 02:57:21.099731 | orchestrator | Thursday 09 April 2026 02:57:13 +0000 (0:00:00.364) 0:03:53.554 ********
2026-04-09 02:57:21.099748 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.099765 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:57:21.099781 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:57:21.099823 | orchestrator |
2026-04-09 02:57:21.099839 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 02:57:21.099856 | orchestrator | Thursday 09 April 2026 02:57:13 +0000 (0:00:00.330) 0:03:53.885 ********
2026-04-09 02:57:21.099872 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.099888 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.099904 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.099920 | orchestrator |
2026-04-09 02:57:21.099937 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 02:57:21.099953 | orchestrator | Thursday 09 April 2026 02:57:14 +0000 (0:00:00.380) 0:03:54.266 ********
2026-04-09 02:57:21.099969 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.099986 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.100002 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.100018 | orchestrator |
2026-04-09 02:57:21.100035 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 02:57:21.100051 | orchestrator | Thursday 09 April 2026 02:57:14 +0000 (0:00:00.673) 0:03:54.939 ********
2026-04-09 02:57:21.100068 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.100084 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.100100 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.100117 | orchestrator |
2026-04-09 02:57:21.100142 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-09 02:57:21.100159 | orchestrator | Thursday 09 April 2026 02:57:15 +0000 (0:00:00.712) 0:03:55.652 ********
2026-04-09 02:57:21.100175 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.100191 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.100207 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.100224 | orchestrator |
2026-04-09 02:57:21.100240 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-09 02:57:21.100257 | orchestrator | Thursday 09 April 2026 02:57:15 +0000 (0:00:00.376) 0:03:56.028 ********
2026-04-09 02:57:21.100274 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:57:21.100291 | orchestrator |
2026-04-09 02:57:21.100307 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-09 02:57:21.100324 | orchestrator | Thursday 09 April 2026 02:57:16 +0000 (0:00:00.938) 0:03:56.966 ********
2026-04-09 02:57:21.100340 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:57:21.100357 | orchestrator |
2026-04-09 02:57:21.100373 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-09 02:57:21.100389 | orchestrator | Thursday 09 April 2026 02:57:16 +0000 (0:00:00.162) 0:03:57.128 ********
2026-04-09 02:57:21.100405 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 02:57:21.100422 | orchestrator |
2026-04-09 02:57:21.100438 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-09 02:57:21.100454 | orchestrator | Thursday 09 April 2026 02:57:17 +0000 (0:00:01.050) 0:03:58.179 ********
2026-04-09 02:57:21.100479 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.100496 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.100512 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.100528 | orchestrator |
2026-04-09 02:57:21.100544 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-09 02:57:21.100560 | orchestrator | Thursday 09 April 2026 02:57:18 +0000 (0:00:00.367) 0:03:58.547 ********
2026-04-09 02:57:21.100577 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:57:21.100593 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:57:21.100609 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:57:21.100625 | orchestrator |
2026-04-09 02:57:21.100641 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-09 02:57:21.100657 | orchestrator | Thursday 09 April 2026 02:57:19 +0000 (0:00:00.659) 0:03:59.206 ********
2026-04-09 02:57:21.100674 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:57:21.100690 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:57:21.100706 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:57:21.100723 | orchestrator |
2026-04-09 02:57:21.100738 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-09 02:57:21.100755 | orchestrator | Thursday 09 April 2026 02:57:20 +0000 (0:00:01.240) 0:04:00.447 ********
2026-04-09 02:57:21.100771 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:57:21.100787 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:57:21.100827 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:57:21.100842 | orchestrator |
2026-04-09 02:57:21.100868 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-09 02:58:31.948183 | orchestrator | Thursday 09 April 2026 02:57:21 +0000 (0:00:00.840) 0:04:01.287 ********
2026-04-09 02:58:31.948302 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:58:31.948321 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:58:31.948333 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:58:31.948345 | orchestrator |
2026-04-09 02:58:31.948357 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-09 02:58:31.948370 | orchestrator | Thursday 09 April 2026 02:57:21 +0000 (0:00:00.734) 0:04:02.021 ********
2026-04-09 02:58:31.948382 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:31.948396 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:58:31.948408 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:58:31.948420 | orchestrator |
2026-04-09 02:58:31.948432 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-09 02:58:31.948444 | orchestrator | Thursday 09 April 2026 02:57:22 +0000 (0:00:01.133) 0:04:03.155 ********
2026-04-09 02:58:31.948456 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:58:31.948468 | orchestrator |
2026-04-09 02:58:31.948480 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-09 02:58:31.948493 | orchestrator | Thursday 09 April 2026 02:57:24 +0000 (0:00:01.284) 0:04:04.439 ********
2026-04-09 02:58:31.948504 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:31.948516 | orchestrator |
2026-04-09 02:58:31.948528 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-09 02:58:31.948540 | orchestrator | Thursday 09 April 2026 02:57:24 +0000 (0:00:00.757) 0:04:05.197 ********
2026-04-09 02:58:31.948552 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-09 02:58:31.948563 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 02:58:31.948672 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 02:58:31.948694 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 02:58:31.948707 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-09 02:58:31.948720 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 02:58:31.948733 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 02:58:31.948745 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-09 02:58:31.948758 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 02:58:31.948800 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-09 02:58:31.948814 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-09 02:58:31.948827 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-09 02:58:31.948839 | orchestrator |
2026-04-09 02:58:31.948852 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-09 02:58:31.948864 | orchestrator | Thursday 09 April 2026 02:57:28 +0000 (0:00:03.190) 0:04:08.388 ********
2026-04-09 02:58:31.948876 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:58:31.948888 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:58:31.948914 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:58:31.948927 | orchestrator |
2026-04-09 02:58:31.948938 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-09 02:58:31.948949 | orchestrator | Thursday 09 April 2026 02:57:29 +0000 (0:00:01.203) 0:04:09.591 ********
2026-04-09 02:58:31.948962 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:31.948975 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:58:31.948986 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:58:31.948997 | orchestrator |
2026-04-09 02:58:31.949009 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-09 02:58:31.949022 | orchestrator | Thursday 09 April 2026 02:57:30 +0000 (0:00:00.674) 0:04:10.266 ********
2026-04-09 02:58:31.949034 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:31.949045 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:58:31.949055 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:58:31.949065 | orchestrator |
2026-04-09 02:58:31.949075 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-09 02:58:31.949087 | orchestrator | Thursday 09 April 2026 02:57:30 +0000 (0:00:00.360) 0:04:10.626 ********
2026-04-09 02:58:31.949098 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:58:31.949110 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:58:31.949121 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:58:31.949133 | orchestrator |
2026-04-09 02:58:31.949146 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-09 02:58:31.949158 | orchestrator | Thursday 09 April 2026 02:57:31 +0000 (0:00:01.453) 0:04:12.080 ********
2026-04-09 02:58:31.949169 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:58:31.949182 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:58:31.949194 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:58:31.949205 | orchestrator |
2026-04-09 02:58:31.949217 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-09 02:58:31.949228 | orchestrator | Thursday 09 April 2026 02:57:33 +0000 (0:00:01.233) 0:04:13.313 ********
2026-04-09 02:58:31.949240 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:31.949251 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:31.949263 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:31.949276 | orchestrator |
2026-04-09 02:58:31.949288 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-09 02:58:31.949298 | orchestrator | Thursday 09 April 2026 02:57:33 +0000 (0:00:00.695) 0:04:14.009 ********
2026-04-09 02:58:31.949308 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:58:31.949319 | orchestrator |
2026-04-09 02:58:31.949330 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-09 02:58:31.949341 | orchestrator | Thursday 09 April 2026 02:57:34 +0000 (0:00:00.628) 0:04:14.637 ********
2026-04-09 02:58:31.949352 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:31.949364 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:31.949376 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:31.949389 | orchestrator |
2026-04-09 02:58:31.949400 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-09 02:58:31.949436 | orchestrator | Thursday 09 April 2026 02:57:34 +0000 (0:00:00.362) 0:04:15.000 ********
2026-04-09 02:58:31.949447 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:31.949488 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:31.949500 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:31.949511 | orchestrator |
2026-04-09 02:58:31.949523 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-09 02:58:31.949533 | orchestrator | Thursday 09 April 2026 02:57:35 +0000 (0:00:00.631) 0:04:15.631 ********
2026-04-09 02:58:31.949545 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:58:31.949558 | orchestrator |
2026-04-09 02:58:31.949571 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-09 02:58:31.949606 | orchestrator | Thursday 09 April 2026 02:57:36 +0000 (0:00:00.626) 0:04:16.257 ********
2026-04-09 02:58:31.949618 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:58:31.949630 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:58:31.949642 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:58:31.949654 | orchestrator |
2026-04-09 02:58:31.949666 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-09 02:58:31.949677 | orchestrator | Thursday 09 April 2026 02:57:37 +0000 (0:00:01.861) 0:04:18.119 ********
2026-04-09 02:58:31.949689 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:58:31.949701 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:58:31.949713 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:58:31.949724 | orchestrator |
2026-04-09 02:58:31.949734 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-09 02:58:31.949745 | orchestrator | Thursday 09 April 2026 02:57:39 +0000 (0:00:01.482) 0:04:19.602 ********
2026-04-09 02:58:31.949757 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:58:31.949769 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:58:31.949781 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:58:31.949792 | orchestrator |
2026-04-09 02:58:31.949804 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-09 02:58:31.949815 | orchestrator | Thursday 09 April 2026 02:57:41 +0000 (0:00:01.828) 0:04:21.430 ********
2026-04-09 02:58:31.949825 | orchestrator | changed: [testbed-node-0]
2026-04-09 02:58:31.949836 | orchestrator | changed: [testbed-node-1]
2026-04-09 02:58:31.949846 | orchestrator | changed: [testbed-node-2]
2026-04-09 02:58:31.949859 | orchestrator |
2026-04-09 02:58:31.949871 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-09 02:58:31.949883 | orchestrator | Thursday 09 April 2026 02:57:44 +0000 (0:00:03.174) 0:04:24.604 ********
2026-04-09 02:58:31.949895 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:58:31.949906 | orchestrator |
2026-04-09 02:58:31.949917 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-09 02:58:31.949929 | orchestrator | Thursday 09 April 2026 02:57:45 +0000 (0:00:01.017) 0:04:25.621 ********
2026-04-09 02:58:31.949951 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-09 02:58:31.949963 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:31.949975 | orchestrator |
2026-04-09 02:58:31.949986 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-09 02:58:31.949997 | orchestrator | Thursday 09 April 2026 02:58:07 +0000 (0:00:21.944) 0:04:47.566 ********
2026-04-09 02:58:31.950008 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:31.950094 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:58:31.950109 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:58:31.950121 | orchestrator |
2026-04-09 02:58:31.950134 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-09 02:58:31.950147 | orchestrator | Thursday 09 April 2026 02:58:16 +0000 (0:00:09.318) 0:04:56.884 ********
2026-04-09 02:58:31.950159 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:31.950173 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:31.950186 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:31.950211 | orchestrator |
2026-04-09 02:58:31.950223 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-09 02:58:31.950236 | orchestrator | Thursday 09 April 2026 02:58:17 +0000 (0:00:00.344) 0:04:57.229 ********
2026-04-09 02:58:31.950253 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d73ecf0446fe4f89b91d9d0860478ce77e499d5'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-09 02:58:31.950270 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d73ecf0446fe4f89b91d9d0860478ce77e499d5'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-09 02:58:31.950284 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d73ecf0446fe4f89b91d9d0860478ce77e499d5'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-09 02:58:31.950314 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d73ecf0446fe4f89b91d9d0860478ce77e499d5'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-09 02:58:46.798365 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d73ecf0446fe4f89b91d9d0860478ce77e499d5'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-09 02:58:46.798510 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d73ecf0446fe4f89b91d9d0860478ce77e499d5'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2d73ecf0446fe4f89b91d9d0860478ce77e499d5'}])
2026-04-09 02:58:46.798620 | orchestrator |
2026-04-09 02:58:46.798646 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 02:58:46.798669 | orchestrator | Thursday 09 April 2026 02:58:31 +0000 (0:00:14.909) 0:05:12.138 ********
2026-04-09 02:58:46.798688 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:46.798708 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:46.798726 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:46.798744 | orchestrator |
2026-04-09 02:58:46.798763 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-09 02:58:46.798782 | orchestrator | Thursday 09 April 2026 02:58:32 +0000 (0:00:00.390) 0:05:12.529 ********
2026-04-09 02:58:46.798803 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:58:46.798823 | orchestrator |
2026-04-09 02:58:46.798843 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-09 02:58:46.798863 | orchestrator | Thursday 09 April 2026 02:58:33 +0000 (0:00:00.887) 0:05:13.417 ********
2026-04-09 02:58:46.798886 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:46.798909 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:58:46.798930 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:58:46.798952 | orchestrator |
2026-04-09 02:58:46.799011 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-09 02:58:46.799054 | orchestrator | Thursday 09 April 2026 02:58:33 +0000 (0:00:00.386) 0:05:13.803 ********
2026-04-09 02:58:46.799077 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:46.799101 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:46.799123 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:46.799145 | orchestrator |
2026-04-09 02:58:46.799167 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-09 02:58:46.799189 | orchestrator | Thursday 09 April 2026 02:58:33 +0000 (0:00:00.371) 0:05:14.175 ********
2026-04-09 02:58:46.799211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 02:58:46.799234 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 02:58:46.799254 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 02:58:46.799274 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:46.799292 | orchestrator |
2026-04-09 02:58:46.799311 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-09 02:58:46.799329 | orchestrator | Thursday 09 April 2026 02:58:35 +0000 (0:00:01.032) 0:05:15.208 ********
2026-04-09 02:58:46.799345 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:46.799363 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:58:46.799383 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:58:46.799401 | orchestrator |
2026-04-09 02:58:46.799417 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-09 02:58:46.799433 | orchestrator |
2026-04-09 02:58:46.799449 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 02:58:46.799464 | orchestrator | Thursday 09 April 2026 02:58:35 +0000 (0:00:00.982) 0:05:16.190 ********
2026-04-09 02:58:46.799482 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:58:46.799500 | orchestrator |
2026-04-09 02:58:46.799516 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 02:58:46.799614 | orchestrator | Thursday 09 April 2026 02:58:36 +0000 (0:00:00.601) 0:05:16.792 ********
2026-04-09 02:58:46.799644 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 02:58:46.799662 | orchestrator |
2026-04-09 02:58:46.799680 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 02:58:46.799698 | orchestrator | Thursday 09 April 2026 02:58:37 +0000 (0:00:00.847) 0:05:17.640 ********
2026-04-09 02:58:46.799716 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:46.799734 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:58:46.799752 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:58:46.799769 | orchestrator |
2026-04-09 02:58:46.799788 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 02:58:46.799806 | orchestrator | Thursday 09 April 2026 02:58:38 +0000 (0:00:00.785) 0:05:18.425 ********
2026-04-09 02:58:46.799824 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:46.799843 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:46.799861 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:46.799879 | orchestrator |
2026-04-09 02:58:46.799897 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 02:58:46.799915 | orchestrator | Thursday 09 April 2026 02:58:38 +0000 (0:00:00.362) 0:05:18.788 ********
2026-04-09 02:58:46.799934 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:46.799953 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:46.799975 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:46.799992 | orchestrator |
2026-04-09 02:58:46.800041 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 02:58:46.800061 | orchestrator | Thursday 09 April 2026 02:58:39 +0000 (0:00:00.637) 0:05:19.425 ********
2026-04-09 02:58:46.800079 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:46.800098 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:46.800134 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:46.800151 | orchestrator |
2026-04-09 02:58:46.800167 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 02:58:46.800182 | orchestrator | Thursday 09 April 2026 02:58:39 +0000 (0:00:00.356) 0:05:19.782 ********
2026-04-09 02:58:46.800197 | orchestrator | ok: [testbed-node-0]
2026-04-09 02:58:46.800212 | orchestrator | ok: [testbed-node-1]
2026-04-09 02:58:46.800227 | orchestrator | ok: [testbed-node-2]
2026-04-09 02:58:46.800243 | orchestrator |
2026-04-09 02:58:46.800258 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 02:58:46.800273 | orchestrator | Thursday 09 April 2026 02:58:40 +0000 (0:00:00.775) 0:05:20.557 ********
2026-04-09 02:58:46.800288 | orchestrator | skipping: [testbed-node-0]
2026-04-09 02:58:46.800303 | orchestrator | skipping: [testbed-node-1]
2026-04-09 02:58:46.800319 | orchestrator | skipping: [testbed-node-2]
2026-04-09 02:58:46.800334 | orchestrator |
2026-04-09 02:58:46.800351 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 02:58:46.800367 | orchestrator | Thursday 09 April 2026 02:58:40 +0000 (0:00:00.363)
0:05:20.920 ******** 2026-04-09 02:58:46.800382 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:58:46.800399 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:58:46.800415 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:58:46.800430 | orchestrator | 2026-04-09 02:58:46.800447 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 02:58:46.800462 | orchestrator | Thursday 09 April 2026 02:58:41 +0000 (0:00:00.751) 0:05:21.672 ******** 2026-04-09 02:58:46.800478 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:58:46.800493 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:58:46.800510 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:58:46.800525 | orchestrator | 2026-04-09 02:58:46.800574 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 02:58:46.800591 | orchestrator | Thursday 09 April 2026 02:58:42 +0000 (0:00:00.787) 0:05:22.460 ******** 2026-04-09 02:58:46.800608 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:58:46.800623 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:58:46.800637 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:58:46.800653 | orchestrator | 2026-04-09 02:58:46.800668 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 02:58:46.800683 | orchestrator | Thursday 09 April 2026 02:58:43 +0000 (0:00:00.774) 0:05:23.235 ******** 2026-04-09 02:58:46.800699 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:58:46.800727 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:58:46.800742 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:58:46.800757 | orchestrator | 2026-04-09 02:58:46.800774 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 02:58:46.800790 | orchestrator | Thursday 09 April 2026 02:58:43 +0000 (0:00:00.325) 0:05:23.560 ******** 2026-04-09 
02:58:46.800805 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:58:46.800822 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:58:46.800838 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:58:46.800854 | orchestrator | 2026-04-09 02:58:46.800871 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 02:58:46.800887 | orchestrator | Thursday 09 April 2026 02:58:44 +0000 (0:00:00.660) 0:05:24.221 ******** 2026-04-09 02:58:46.800902 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:58:46.800918 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:58:46.800933 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:58:46.800950 | orchestrator | 2026-04-09 02:58:46.800965 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 02:58:46.800981 | orchestrator | Thursday 09 April 2026 02:58:44 +0000 (0:00:00.353) 0:05:24.575 ******** 2026-04-09 02:58:46.800996 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:58:46.801012 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:58:46.801027 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:58:46.801057 | orchestrator | 2026-04-09 02:58:46.801073 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 02:58:46.801088 | orchestrator | Thursday 09 April 2026 02:58:44 +0000 (0:00:00.357) 0:05:24.932 ******** 2026-04-09 02:58:46.801104 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:58:46.801120 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:58:46.801135 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:58:46.801152 | orchestrator | 2026-04-09 02:58:46.801168 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 02:58:46.801184 | orchestrator | Thursday 09 April 2026 02:58:45 +0000 (0:00:00.352) 0:05:25.284 ******** 2026-04-09 02:58:46.801201 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 02:58:46.801216 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:58:46.801233 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:58:46.801250 | orchestrator | 2026-04-09 02:58:46.801266 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 02:58:46.801282 | orchestrator | Thursday 09 April 2026 02:58:45 +0000 (0:00:00.615) 0:05:25.899 ******** 2026-04-09 02:58:46.801299 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:58:46.801316 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:58:46.801332 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:58:46.801347 | orchestrator | 2026-04-09 02:58:46.801363 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 02:58:46.801383 | orchestrator | Thursday 09 April 2026 02:58:46 +0000 (0:00:00.364) 0:05:26.264 ******** 2026-04-09 02:58:46.801402 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:58:46.801420 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:58:46.801437 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:58:46.801454 | orchestrator | 2026-04-09 02:58:46.801471 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 02:58:46.801488 | orchestrator | Thursday 09 April 2026 02:58:46 +0000 (0:00:00.373) 0:05:26.637 ******** 2026-04-09 02:58:46.801505 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:58:46.801521 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:58:46.801562 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:58:46.801580 | orchestrator | 2026-04-09 02:58:46.801596 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 02:58:46.801631 | orchestrator | Thursday 09 April 2026 02:58:46 +0000 (0:00:00.354) 0:05:26.991 ******** 2026-04-09 02:59:49.164219 | orchestrator | ok: [testbed-node-0] 
2026-04-09 02:59:49.164315 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:59:49.164322 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:59:49.164326 | orchestrator | 2026-04-09 02:59:49.164331 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-09 02:59:49.164337 | orchestrator | Thursday 09 April 2026 02:58:47 +0000 (0:00:00.922) 0:05:27.914 ******** 2026-04-09 02:59:49.164341 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 02:59:49.164346 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 02:59:49.164350 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 02:59:49.164354 | orchestrator | 2026-04-09 02:59:49.164358 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-09 02:59:49.164361 | orchestrator | Thursday 09 April 2026 02:58:48 +0000 (0:00:00.776) 0:05:28.690 ******** 2026-04-09 02:59:49.164365 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:59:49.164370 | orchestrator | 2026-04-09 02:59:49.164373 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-09 02:59:49.164377 | orchestrator | Thursday 09 April 2026 02:58:49 +0000 (0:00:00.840) 0:05:29.531 ******** 2026-04-09 02:59:49.164427 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:59:49.164434 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:59:49.164445 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:59:49.164454 | orchestrator | 2026-04-09 02:59:49.164480 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-09 02:59:49.164487 | orchestrator | Thursday 09 April 2026 02:58:50 +0000 (0:00:00.805) 0:05:30.336 ******** 2026-04-09 02:59:49.164493 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 02:59:49.164498 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:59:49.164504 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:59:49.164510 | orchestrator | 2026-04-09 02:59:49.164516 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-09 02:59:49.164522 | orchestrator | Thursday 09 April 2026 02:58:50 +0000 (0:00:00.383) 0:05:30.720 ******** 2026-04-09 02:59:49.164528 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 02:59:49.164535 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 02:59:49.164540 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 02:59:49.164548 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-09 02:59:49.164554 | orchestrator | 2026-04-09 02:59:49.164573 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-09 02:59:49.164578 | orchestrator | Thursday 09 April 2026 02:59:01 +0000 (0:00:10.773) 0:05:41.493 ******** 2026-04-09 02:59:49.164581 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:59:49.164585 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:59:49.164589 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:59:49.164593 | orchestrator | 2026-04-09 02:59:49.164596 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-09 02:59:49.164600 | orchestrator | Thursday 09 April 2026 02:59:01 +0000 (0:00:00.375) 0:05:41.869 ******** 2026-04-09 02:59:49.164604 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 02:59:49.164608 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 02:59:49.164611 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 02:59:49.164615 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 02:59:49.164619 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 02:59:49.164623 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 02:59:49.164627 | orchestrator | 2026-04-09 02:59:49.164630 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-09 02:59:49.164634 | orchestrator | Thursday 09 April 2026 02:59:04 +0000 (0:00:02.528) 0:05:44.397 ******** 2026-04-09 02:59:49.164638 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 02:59:49.164642 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 02:59:49.164645 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 02:59:49.164649 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 02:59:49.164653 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-09 02:59:49.164656 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-09 02:59:49.164660 | orchestrator | 2026-04-09 02:59:49.164664 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-09 02:59:49.164668 | orchestrator | Thursday 09 April 2026 02:59:05 +0000 (0:00:01.255) 0:05:45.653 ******** 2026-04-09 02:59:49.164671 | orchestrator | ok: [testbed-node-0] 2026-04-09 02:59:49.164675 | orchestrator | ok: [testbed-node-1] 2026-04-09 02:59:49.164679 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:59:49.164683 | orchestrator | 2026-04-09 02:59:49.164686 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-09 02:59:49.164690 | orchestrator | Thursday 09 April 2026 02:59:06 +0000 (0:00:00.799) 0:05:46.452 ******** 2026-04-09 02:59:49.164694 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:59:49.164697 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:59:49.164701 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:59:49.164705 | 
orchestrator | 2026-04-09 02:59:49.164708 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-09 02:59:49.164712 | orchestrator | Thursday 09 April 2026 02:59:06 +0000 (0:00:00.350) 0:05:46.803 ******** 2026-04-09 02:59:49.164720 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:59:49.164724 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:59:49.164728 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:59:49.164732 | orchestrator | 2026-04-09 02:59:49.164735 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-09 02:59:49.164739 | orchestrator | Thursday 09 April 2026 02:59:07 +0000 (0:00:00.620) 0:05:47.424 ******** 2026-04-09 02:59:49.164743 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:59:49.164747 | orchestrator | 2026-04-09 02:59:49.164761 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-09 02:59:49.164765 | orchestrator | Thursday 09 April 2026 02:59:07 +0000 (0:00:00.572) 0:05:47.996 ******** 2026-04-09 02:59:49.164769 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:59:49.164773 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:59:49.164777 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:59:49.164780 | orchestrator | 2026-04-09 02:59:49.164785 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-09 02:59:49.164789 | orchestrator | Thursday 09 April 2026 02:59:08 +0000 (0:00:00.357) 0:05:48.354 ******** 2026-04-09 02:59:49.164794 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:59:49.164798 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:59:49.164802 | orchestrator | skipping: [testbed-node-2] 2026-04-09 02:59:49.164806 | orchestrator | 2026-04-09 02:59:49.164810 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-09 02:59:49.164815 | orchestrator | Thursday 09 April 2026 02:59:08 +0000 (0:00:00.690) 0:05:49.044 ******** 2026-04-09 02:59:49.164819 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 02:59:49.164824 | orchestrator | 2026-04-09 02:59:49.164828 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-09 02:59:49.164832 | orchestrator | Thursday 09 April 2026 02:59:09 +0000 (0:00:00.565) 0:05:49.609 ******** 2026-04-09 02:59:49.164836 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:59:49.164841 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:59:49.164845 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:59:49.164849 | orchestrator | 2026-04-09 02:59:49.164853 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-09 02:59:49.164858 | orchestrator | Thursday 09 April 2026 02:59:10 +0000 (0:00:01.186) 0:05:50.796 ******** 2026-04-09 02:59:49.164862 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:59:49.164866 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:59:49.164871 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:59:49.164875 | orchestrator | 2026-04-09 02:59:49.164880 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-09 02:59:49.164884 | orchestrator | Thursday 09 April 2026 02:59:12 +0000 (0:00:01.502) 0:05:52.299 ******** 2026-04-09 02:59:49.164888 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:59:49.164893 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:59:49.164897 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:59:49.164901 | orchestrator | 2026-04-09 02:59:49.164906 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-04-09 02:59:49.164913 | orchestrator | Thursday 09 April 2026 02:59:13 +0000 (0:00:01.803) 0:05:54.102 ******** 2026-04-09 02:59:49.164917 | orchestrator | changed: [testbed-node-0] 2026-04-09 02:59:49.164922 | orchestrator | changed: [testbed-node-1] 2026-04-09 02:59:49.164926 | orchestrator | changed: [testbed-node-2] 2026-04-09 02:59:49.164930 | orchestrator | 2026-04-09 02:59:49.164935 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-09 02:59:49.164939 | orchestrator | Thursday 09 April 2026 02:59:15 +0000 (0:00:02.018) 0:05:56.120 ******** 2026-04-09 02:59:49.164944 | orchestrator | skipping: [testbed-node-0] 2026-04-09 02:59:49.164948 | orchestrator | skipping: [testbed-node-1] 2026-04-09 02:59:49.164952 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-09 02:59:49.164960 | orchestrator | 2026-04-09 02:59:49.164964 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-09 02:59:49.164969 | orchestrator | Thursday 09 April 2026 02:59:16 +0000 (0:00:00.706) 0:05:56.827 ******** 2026-04-09 02:59:49.164973 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-09 02:59:49.164977 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-04-09 02:59:49.164981 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-04-09 02:59:49.164986 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2026-04-09 02:59:49.164991 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 02:59:49.164995 | orchestrator | 2026-04-09 02:59:49.164999 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-09 02:59:49.165004 | orchestrator | Thursday 09 April 2026 02:59:40 +0000 (0:00:24.300) 0:06:21.128 ******** 2026-04-09 02:59:49.165008 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 02:59:49.165012 | orchestrator | 2026-04-09 02:59:49.165016 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-09 02:59:49.165021 | orchestrator | Thursday 09 April 2026 02:59:42 +0000 (0:00:01.270) 0:06:22.398 ******** 2026-04-09 02:59:49.165025 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:59:49.165029 | orchestrator | 2026-04-09 02:59:49.165034 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-09 02:59:49.165038 | orchestrator | Thursday 09 April 2026 02:59:42 +0000 (0:00:00.352) 0:06:22.750 ******** 2026-04-09 02:59:49.165042 | orchestrator | ok: [testbed-node-2] 2026-04-09 02:59:49.165047 | orchestrator | 2026-04-09 02:59:49.165051 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-09 02:59:49.165055 | orchestrator | Thursday 09 April 2026 02:59:42 +0000 (0:00:00.169) 0:06:22.920 ******** 2026-04-09 02:59:49.165059 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-04-09 02:59:49.165063 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-04-09 02:59:49.165067 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-04-09 02:59:49.165071 | orchestrator | 2026-04-09 02:59:49.165074 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-04-09 02:59:49.165081 | orchestrator | Thursday 09 April 2026 02:59:49 +0000 (0:00:06.436) 0:06:29.357 ******** 2026-04-09 03:00:11.679129 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-09 03:00:11.679231 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-04-09 03:00:11.679245 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-04-09 03:00:11.679254 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-09 03:00:11.679263 | orchestrator | 2026-04-09 03:00:11.679273 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 03:00:11.679283 | orchestrator | Thursday 09 April 2026 02:59:54 +0000 (0:00:05.185) 0:06:34.542 ******** 2026-04-09 03:00:11.679291 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:00:11.679302 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:00:11.679310 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:00:11.679320 | orchestrator | 2026-04-09 03:00:11.679329 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-09 03:00:11.679387 | orchestrator | Thursday 09 April 2026 02:59:55 +0000 (0:00:00.714) 0:06:35.257 ******** 2026-04-09 03:00:11.679396 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:00:11.679426 | orchestrator | 2026-04-09 03:00:11.679436 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-09 03:00:11.679445 | orchestrator | Thursday 09 April 2026 02:59:55 +0000 (0:00:00.590) 0:06:35.847 ******** 2026-04-09 03:00:11.679454 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:00:11.679463 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:00:11.679471 | orchestrator | ok: 
[testbed-node-2] 2026-04-09 03:00:11.679480 | orchestrator | 2026-04-09 03:00:11.679489 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-09 03:00:11.679498 | orchestrator | Thursday 09 April 2026 02:59:56 +0000 (0:00:00.635) 0:06:36.483 ******** 2026-04-09 03:00:11.679506 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:00:11.679515 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:00:11.679524 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:00:11.679532 | orchestrator | 2026-04-09 03:00:11.679541 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-09 03:00:11.679550 | orchestrator | Thursday 09 April 2026 02:59:57 +0000 (0:00:01.186) 0:06:37.670 ******** 2026-04-09 03:00:11.679559 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 03:00:11.679568 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 03:00:11.679591 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 03:00:11.679600 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:00:11.679609 | orchestrator | 2026-04-09 03:00:11.679617 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-09 03:00:11.679626 | orchestrator | Thursday 09 April 2026 02:59:58 +0000 (0:00:00.680) 0:06:38.350 ******** 2026-04-09 03:00:11.679635 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:00:11.679644 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:00:11.679652 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:00:11.679661 | orchestrator | 2026-04-09 03:00:11.679670 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-04-09 03:00:11.679680 | orchestrator | 2026-04-09 03:00:11.679692 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 
03:00:11.679703 | orchestrator | Thursday 09 April 2026 02:59:59 +0000 (0:00:00.975) 0:06:39.326 ******** 2026-04-09 03:00:11.679715 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:00:11.679726 | orchestrator | 2026-04-09 03:00:11.679737 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 03:00:11.679747 | orchestrator | Thursday 09 April 2026 02:59:59 +0000 (0:00:00.564) 0:06:39.890 ******** 2026-04-09 03:00:11.679757 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:00:11.679766 | orchestrator | 2026-04-09 03:00:11.679775 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 03:00:11.679784 | orchestrator | Thursday 09 April 2026 03:00:00 +0000 (0:00:00.831) 0:06:40.722 ******** 2026-04-09 03:00:11.679793 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:00:11.679802 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:00:11.679811 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:00:11.679820 | orchestrator | 2026-04-09 03:00:11.679828 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 03:00:11.679837 | orchestrator | Thursday 09 April 2026 03:00:00 +0000 (0:00:00.366) 0:06:41.088 ******** 2026-04-09 03:00:11.679846 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:00:11.679855 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:00:11.679864 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:00:11.679872 | orchestrator | 2026-04-09 03:00:11.679881 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 03:00:11.679890 | orchestrator | Thursday 09 April 2026 03:00:01 +0000 (0:00:00.821) 0:06:41.910 ******** 
2026-04-09 03:00:11.679899 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.679908 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.679923 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.679932 | orchestrator |
2026-04-09 03:00:11.679941 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 03:00:11.679950 | orchestrator | Thursday 09 April 2026 03:00:02 +0000 (0:00:00.765) 0:06:42.676 ********
2026-04-09 03:00:11.679958 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.679967 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.679976 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.679985 | orchestrator |
2026-04-09 03:00:11.679994 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 03:00:11.680003 | orchestrator | Thursday 09 April 2026 03:00:03 +0000 (0:00:01.027) 0:06:43.704 ********
2026-04-09 03:00:11.680012 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:00:11.680021 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:00:11.680029 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:00:11.680038 | orchestrator |
2026-04-09 03:00:11.680061 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 03:00:11.680070 | orchestrator | Thursday 09 April 2026 03:00:03 +0000 (0:00:00.363) 0:06:44.067 ********
2026-04-09 03:00:11.680079 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:00:11.680088 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:00:11.680097 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:00:11.680106 | orchestrator |
2026-04-09 03:00:11.680114 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 03:00:11.680123 | orchestrator | Thursday 09 April 2026 03:00:04 +0000 (0:00:00.334) 0:06:44.402 ********
2026-04-09 03:00:11.680132 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:00:11.680141 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:00:11.680149 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:00:11.680158 | orchestrator |
2026-04-09 03:00:11.680167 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 03:00:11.680176 | orchestrator | Thursday 09 April 2026 03:00:04 +0000 (0:00:00.361) 0:06:44.764 ********
2026-04-09 03:00:11.680185 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.680194 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.680202 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.680211 | orchestrator |
2026-04-09 03:00:11.680220 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 03:00:11.680229 | orchestrator | Thursday 09 April 2026 03:00:05 +0000 (0:00:01.073) 0:06:45.837 ********
2026-04-09 03:00:11.680237 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.680246 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.680255 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.680263 | orchestrator |
2026-04-09 03:00:11.680272 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 03:00:11.680281 | orchestrator | Thursday 09 April 2026 03:00:06 +0000 (0:00:00.774) 0:06:46.612 ********
2026-04-09 03:00:11.680290 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:00:11.680299 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:00:11.680307 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:00:11.680316 | orchestrator |
2026-04-09 03:00:11.680325 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 03:00:11.680349 | orchestrator | Thursday 09 April 2026 03:00:06 +0000 (0:00:00.379) 0:06:46.991 ********
2026-04-09 03:00:11.680358 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:00:11.680367 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:00:11.680376 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:00:11.680385 | orchestrator |
2026-04-09 03:00:11.680394 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 03:00:11.680407 | orchestrator | Thursday 09 April 2026 03:00:07 +0000 (0:00:00.346) 0:06:47.337 ********
2026-04-09 03:00:11.680416 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.680424 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.680433 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.680442 | orchestrator |
2026-04-09 03:00:11.680457 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 03:00:11.680465 | orchestrator | Thursday 09 April 2026 03:00:07 +0000 (0:00:00.666) 0:06:48.004 ********
2026-04-09 03:00:11.680474 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.680483 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.680492 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.680501 | orchestrator |
2026-04-09 03:00:11.680510 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 03:00:11.680518 | orchestrator | Thursday 09 April 2026 03:00:08 +0000 (0:00:00.397) 0:06:48.402 ********
2026-04-09 03:00:11.680527 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.680536 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.680544 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.680553 | orchestrator |
2026-04-09 03:00:11.680562 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 03:00:11.680571 | orchestrator | Thursday 09 April 2026 03:00:08 +0000 (0:00:00.351) 0:06:48.753 ********
2026-04-09 03:00:11.680580 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:00:11.680588 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:00:11.680597 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:00:11.680606 | orchestrator |
2026-04-09 03:00:11.680615 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 03:00:11.680624 | orchestrator | Thursday 09 April 2026 03:00:08 +0000 (0:00:00.347) 0:06:49.101 ********
2026-04-09 03:00:11.680632 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:00:11.680641 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:00:11.680650 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:00:11.680659 | orchestrator |
2026-04-09 03:00:11.680668 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 03:00:11.680677 | orchestrator | Thursday 09 April 2026 03:00:09 +0000 (0:00:00.670) 0:06:49.772 ********
2026-04-09 03:00:11.680686 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:00:11.680694 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:00:11.680703 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:00:11.680712 | orchestrator |
2026-04-09 03:00:11.680721 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 03:00:11.680729 | orchestrator | Thursday 09 April 2026 03:00:09 +0000 (0:00:00.338) 0:06:50.110 ********
2026-04-09 03:00:11.680738 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.680747 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.680756 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.680764 | orchestrator |
2026-04-09 03:00:11.680773 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 03:00:11.680782 | orchestrator | Thursday 09 April 2026 03:00:10 +0000 (0:00:00.449) 0:06:50.560 ********
2026-04-09 03:00:11.680791 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.680800 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.680808 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.680817 | orchestrator |
2026-04-09 03:00:11.680826 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-09 03:00:11.680835 | orchestrator | Thursday 09 April 2026 03:00:11 +0000 (0:00:00.920) 0:06:51.480 ********
2026-04-09 03:00:11.680844 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:00:11.680852 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:00:11.680861 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:00:11.680870 | orchestrator |
2026-04-09 03:00:11.680879 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-09 03:00:11.680893 | orchestrator | Thursday 09 April 2026 03:00:11 +0000 (0:00:00.393) 0:06:51.874 ********
2026-04-09 03:01:14.783309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 03:01:14.783394 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 03:01:14.783402 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 03:01:14.783425 | orchestrator |
2026-04-09 03:01:14.783431 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-09 03:01:14.783438 | orchestrator | Thursday 09 April 2026 03:00:12 +0000 (0:00:00.702) 0:06:52.577 ********
2026-04-09 03:01:14.783443 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 03:01:14.783449 | orchestrator |
2026-04-09 03:01:14.783454 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-09 03:01:14.783459 | orchestrator | Thursday 09 April 2026 03:00:13 +0000 (0:00:00.840) 0:06:53.417 ********
2026-04-09 03:01:14.783465 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:14.783471 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:14.783476 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:14.783481 | orchestrator |
2026-04-09 03:01:14.783486 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-09 03:01:14.783492 | orchestrator | Thursday 09 April 2026 03:00:13 +0000 (0:00:00.368) 0:06:53.785 ********
2026-04-09 03:01:14.783497 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:14.783502 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:14.783507 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:14.783512 | orchestrator |
2026-04-09 03:01:14.783517 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-09 03:01:14.783522 | orchestrator | Thursday 09 April 2026 03:00:13 +0000 (0:00:00.394) 0:06:54.179 ********
2026-04-09 03:01:14.783527 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:01:14.783533 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:01:14.783538 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:01:14.783543 | orchestrator |
2026-04-09 03:01:14.783548 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-09 03:01:14.783553 | orchestrator | Thursday 09 April 2026 03:00:14 +0000 (0:00:00.654) 0:06:54.834 ********
2026-04-09 03:01:14.783559 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:01:14.783564 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:01:14.783568 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:01:14.783573 | orchestrator |
2026-04-09 03:01:14.783588 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-09 03:01:14.783594 | orchestrator | Thursday 09 April 2026 03:00:15 +0000 (0:00:00.672) 0:06:55.506 ********
2026-04-09 03:01:14.783599 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 03:01:14.783605 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 03:01:14.783610 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 03:01:14.783615 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 03:01:14.783620 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 03:01:14.783626 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 03:01:14.783631 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 03:01:14.783636 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 03:01:14.783641 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 03:01:14.783646 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 03:01:14.783651 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 03:01:14.783656 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 03:01:14.783661 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 03:01:14.783666 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 03:01:14.783676 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 03:01:14.783681 | orchestrator |
2026-04-09 03:01:14.783686 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-09 03:01:14.783691 | orchestrator | Thursday 09 April 2026 03:00:17 +0000 (0:00:02.112) 0:06:57.618 ********
2026-04-09 03:01:14.783696 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:14.783701 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:14.783706 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:14.783711 | orchestrator |
2026-04-09 03:01:14.783716 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-09 03:01:14.783721 | orchestrator | Thursday 09 April 2026 03:00:17 +0000 (0:00:00.350) 0:06:57.969 ********
2026-04-09 03:01:14.783726 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 03:01:14.783731 | orchestrator |
2026-04-09 03:01:14.783736 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-09 03:01:14.783741 | orchestrator | Thursday 09 April 2026 03:00:18 +0000 (0:00:00.873) 0:06:58.842 ********
2026-04-09 03:01:14.783746 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 03:01:14.783752 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 03:01:14.783769 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 03:01:14.783774 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-09 03:01:14.783779 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-04-09 03:01:14.783785 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-09 03:01:14.783790 | orchestrator |
2026-04-09 03:01:14.783796 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-09 03:01:14.783804 | orchestrator | Thursday 09 April 2026 03:00:19 +0000 (0:00:01.020) 0:06:59.862 ********
2026-04-09 03:01:14.783813 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:01:14.783821 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 03:01:14.783830 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 03:01:14.783838 | orchestrator |
2026-04-09 03:01:14.783847 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-09 03:01:14.783856 | orchestrator | Thursday 09 April 2026 03:00:21 +0000 (0:00:02.110) 0:07:01.973 ********
2026-04-09 03:01:14.783865 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-09 03:01:14.783874 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 03:01:14.783882 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:01:14.783891 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-09 03:01:14.783899 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-09 03:01:14.783908 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:01:14.783916 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-09 03:01:14.783924 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-09 03:01:14.783933 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:01:14.783941 | orchestrator |
2026-04-09 03:01:14.783949 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-09 03:01:14.783958 | orchestrator | Thursday 09 April 2026 03:00:22 +0000 (0:00:01.146) 0:07:03.120 ********
2026-04-09 03:01:14.783966 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 03:01:14.783975 | orchestrator |
2026-04-09 03:01:14.783983 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-09 03:01:14.783992 | orchestrator | Thursday 09 April 2026 03:00:24 +0000 (0:00:02.055) 0:07:05.175 ********
2026-04-09 03:01:14.784004 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 03:01:14.784013 | orchestrator |
2026-04-09 03:01:14.784027 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-04-09 03:01:14.784034 | orchestrator | Thursday 09 April 2026 03:00:25 +0000 (0:00:00.920) 0:07:06.095 ********
2026-04-09 03:01:14.784042 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 03:01:14.784051 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 03:01:14.784058 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})
2026-04-09 03:01:14.784066 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 03:01:14.784073 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 03:01:14.784080 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})
2026-04-09 03:01:14.784088 | orchestrator |
2026-04-09 03:01:14.784095 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-09 03:01:14.784102 | orchestrator | Thursday 09 April 2026 03:01:09 +0000 (0:00:43.402) 0:07:49.498 ********
2026-04-09 03:01:14.784110 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:14.784117 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:14.784124 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:14.784131 | orchestrator |
2026-04-09 03:01:14.784139 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-09 03:01:14.784146 | orchestrator | Thursday 09 April 2026 03:01:09 +0000 (0:00:00.353) 0:07:49.852 ********
2026-04-09 03:01:14.784153 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 03:01:14.784160 | orchestrator |
2026-04-09 03:01:14.784168 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-09 03:01:14.784175 | orchestrator | Thursday 09 April 2026 03:01:10 +0000 (0:00:00.899) 0:07:50.751 ********
2026-04-09 03:01:14.784182 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:01:14.784189 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:01:14.784197 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:01:14.784204 | orchestrator |
2026-04-09 03:01:14.784238 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-09 03:01:14.784246 | orchestrator | Thursday 09 April 2026 03:01:11 +0000 (0:00:00.701) 0:07:51.453 ********
2026-04-09 03:01:14.784254 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:01:14.784261 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:01:14.784268 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:01:14.784275 | orchestrator |
2026-04-09 03:01:14.784282 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-09 03:01:14.784290 | orchestrator | Thursday 09 April 2026 03:01:13 +0000 (0:00:02.610) 0:07:54.064 ********
2026-04-09 03:01:14.784301 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 03:01:54.462612 | orchestrator |
2026-04-09 03:01:54.462714 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-09 03:01:54.462729 | orchestrator | Thursday 09 April 2026 03:01:14 +0000 (0:00:00.910) 0:07:54.974 ********
2026-04-09 03:01:54.462739 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:01:54.462757 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:01:54.462775 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:01:54.462798 | orchestrator |
2026-04-09 03:01:54.462813 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-09 03:01:54.462827 | orchestrator | Thursday 09 April 2026 03:01:16 +0000 (0:00:01.238) 0:07:56.212 ********
2026-04-09 03:01:54.462868 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:01:54.462885 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:01:54.462899 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:01:54.462913 | orchestrator |
2026-04-09 03:01:54.462928 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-09 03:01:54.462942 | orchestrator | Thursday 09 April 2026 03:01:17 +0000 (0:00:01.170) 0:07:57.382 ********
2026-04-09 03:01:54.462958 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:01:54.462973 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:01:54.462987 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:01:54.462996 | orchestrator |
2026-04-09 03:01:54.463005 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-09 03:01:54.463013 | orchestrator | Thursday 09 April 2026 03:01:19 +0000 (0:00:02.110) 0:07:59.493 ********
2026-04-09 03:01:54.463022 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.463031 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:54.463040 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:54.463048 | orchestrator |
2026-04-09 03:01:54.463057 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-09 03:01:54.463066 | orchestrator | Thursday 09 April 2026 03:01:19 +0000 (0:00:00.390) 0:07:59.883 ********
2026-04-09 03:01:54.463074 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.463088 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:54.463107 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:54.463127 | orchestrator |
2026-04-09 03:01:54.463190 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-09 03:01:54.463207 | orchestrator | Thursday 09 April 2026 03:01:20 +0000 (0:00:00.370) 0:08:00.254 ********
2026-04-09 03:01:54.463238 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-04-09 03:01:54.463253 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 03:01:54.463269 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-09 03:01:54.463285 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-04-09 03:01:54.463301 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-04-09 03:01:54.463316 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-09 03:01:54.463329 | orchestrator |
2026-04-09 03:01:54.463341 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-09 03:01:54.463351 | orchestrator | Thursday 09 April 2026 03:01:21 +0000 (0:00:01.064) 0:08:01.319 ********
2026-04-09 03:01:54.463362 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-04-09 03:01:54.463373 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-04-09 03:01:54.463383 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-09 03:01:54.463394 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-04-09 03:01:54.463404 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-09 03:01:54.463414 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-09 03:01:54.463425 | orchestrator |
2026-04-09 03:01:54.463435 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-09 03:01:54.463446 | orchestrator | Thursday 09 April 2026 03:01:23 +0000 (0:00:02.563) 0:08:03.882 ********
2026-04-09 03:01:54.463456 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-04-09 03:01:54.463466 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-09 03:01:54.463477 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-04-09 03:01:54.463487 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-04-09 03:01:54.463497 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-09 03:01:54.463507 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-09 03:01:54.463518 | orchestrator |
2026-04-09 03:01:54.463527 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-09 03:01:54.463535 | orchestrator | Thursday 09 April 2026 03:01:27 +0000 (0:00:03.693) 0:08:07.575 ********
2026-04-09 03:01:54.463544 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.463553 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:54.463572 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 03:01:54.463581 | orchestrator |
2026-04-09 03:01:54.463590 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-09 03:01:54.463599 | orchestrator | Thursday 09 April 2026 03:01:30 +0000 (0:00:03.007) 0:08:10.583 ********
2026-04-09 03:01:54.463607 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.463616 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:54.463625 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-09 03:01:54.463635 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 03:01:54.463644 | orchestrator |
2026-04-09 03:01:54.463652 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-09 03:01:54.463661 | orchestrator | Thursday 09 April 2026 03:01:43 +0000 (0:00:12.887) 0:08:23.471 ********
2026-04-09 03:01:54.463669 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.463678 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:54.463686 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:54.463695 | orchestrator |
2026-04-09 03:01:54.463704 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 03:01:54.463713 | orchestrator | Thursday 09 April 2026 03:01:44 +0000 (0:00:01.315) 0:08:24.787 ********
2026-04-09 03:01:54.463721 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.463730 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:54.463738 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:54.463747 | orchestrator |
2026-04-09 03:01:54.463756 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-09 03:01:54.463783 | orchestrator | Thursday 09 April 2026 03:01:44 +0000 (0:00:00.408) 0:08:25.196 ********
2026-04-09 03:01:54.463793 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 03:01:54.463802 | orchestrator |
2026-04-09 03:01:54.463811 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-09 03:01:54.463819 | orchestrator | Thursday 09 April 2026 03:01:45 +0000 (0:00:00.989) 0:08:26.186 ********
2026-04-09 03:01:54.463828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 03:01:54.463837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 03:01:54.463846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 03:01:54.463854 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.463863 | orchestrator |
2026-04-09 03:01:54.463871 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-09 03:01:54.463880 | orchestrator | Thursday 09 April 2026 03:01:46 +0000 (0:00:00.604) 0:08:26.790 ********
2026-04-09 03:01:54.463888 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.463897 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:54.463905 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:54.463914 | orchestrator |
2026-04-09 03:01:54.463940 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-09 03:01:54.463950 | orchestrator | Thursday 09 April 2026 03:01:46 +0000 (0:00:00.333) 0:08:27.124 ********
2026-04-09 03:01:54.463958 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.463967 | orchestrator |
2026-04-09 03:01:54.463975 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-09 03:01:54.463984 | orchestrator | Thursday 09 April 2026 03:01:47 +0000 (0:00:00.295) 0:08:27.419 ********
2026-04-09 03:01:54.463992 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464001 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:54.464010 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:54.464018 | orchestrator |
2026-04-09 03:01:54.464027 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-09 03:01:54.464035 | orchestrator | Thursday 09 April 2026 03:01:47 +0000 (0:00:00.648) 0:08:28.068 ********
2026-04-09 03:01:54.464052 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464061 | orchestrator |
2026-04-09 03:01:54.464075 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-09 03:01:54.464095 | orchestrator | Thursday 09 April 2026 03:01:48 +0000 (0:00:00.246) 0:08:28.315 ********
2026-04-09 03:01:54.464104 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464113 | orchestrator |
2026-04-09 03:01:54.464121 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-09 03:01:54.464130 | orchestrator | Thursday 09 April 2026 03:01:48 +0000 (0:00:00.260) 0:08:28.576 ********
2026-04-09 03:01:54.464139 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464176 | orchestrator |
2026-04-09 03:01:54.464191 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-09 03:01:54.464206 | orchestrator | Thursday 09 April 2026 03:01:48 +0000 (0:00:00.145) 0:08:28.722 ********
2026-04-09 03:01:54.464221 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464235 | orchestrator |
2026-04-09 03:01:54.464247 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-09 03:01:54.464256 | orchestrator | Thursday 09 April 2026 03:01:48 +0000 (0:00:00.273) 0:08:28.995 ********
2026-04-09 03:01:54.464264 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464273 | orchestrator |
2026-04-09 03:01:54.464282 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-09 03:01:54.464290 | orchestrator | Thursday 09 April 2026 03:01:49 +0000 (0:00:00.248) 0:08:29.244 ********
2026-04-09 03:01:54.464299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 03:01:54.464307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 03:01:54.464316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 03:01:54.464325 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464333 | orchestrator |
2026-04-09 03:01:54.464342 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-09 03:01:54.464350 | orchestrator | Thursday 09 April 2026 03:01:49 +0000 (0:00:00.450) 0:08:29.695 ********
2026-04-09 03:01:54.464359 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464368 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:01:54.464376 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:01:54.464385 | orchestrator |
2026-04-09 03:01:54.464393 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-09 03:01:54.464402 | orchestrator | Thursday 09 April 2026 03:01:49 +0000 (0:00:00.323) 0:08:30.018 ********
2026-04-09 03:01:54.464411 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464419 | orchestrator |
2026-04-09 03:01:54.464428 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-09 03:01:54.464436 | orchestrator | Thursday 09 April 2026 03:01:50 +0000 (0:00:00.246) 0:08:30.264 ********
2026-04-09 03:01:54.464445 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:01:54.464453 | orchestrator |
2026-04-09 03:01:54.464462 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-09 03:01:54.464470 | orchestrator |
2026-04-09 03:01:54.464479 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 03:01:54.464487 | orchestrator | Thursday 09 April 2026 03:01:51 +0000 (0:00:01.406) 0:08:31.671 ********
2026-04-09 03:01:54.464497 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:01:54.464508 | orchestrator |
2026-04-09 03:01:54.464516 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 03:01:54.464525 | orchestrator | Thursday 09 April 2026 03:01:52 +0000 (0:00:01.378) 0:08:33.050 ********
2026-04-09 03:01:54.464541 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:02:22.940839 | orchestrator |
2026-04-09 03:02:22.940954 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 03:02:22.940972 | orchestrator | Thursday 09 April 2026 03:01:54 +0000 (0:00:01.599) 0:08:34.649 ********
2026-04-09 03:02:22.940985 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:02:22.940997 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:02:22.941009 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:02:22.941020 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:02:22.941032 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:02:22.941043 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:02:22.941054 | orchestrator |
2026-04-09 03:02:22.941065 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 03:02:22.941076 | orchestrator | Thursday 09 April 2026 03:01:55 +0000 (0:00:01.401) 0:08:36.050 ********
2026-04-09 03:02:22.941087 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:02:22.941164 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:02:22.941177 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:02:22.941188 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:02:22.941199 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:02:22.941210 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:02:22.941221 | orchestrator |
2026-04-09 03:02:22.941232 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 03:02:22.941243 | orchestrator | Thursday 09 April 2026 03:01:56 +0000 (0:00:00.763) 0:08:36.814 ********
2026-04-09 03:02:22.941254 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:02:22.941265 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:02:22.941290 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:02:22.941301 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:02:22.941324 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:02:22.941336 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:02:22.941347 | orchestrator |
2026-04-09 03:02:22.941358 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 03:02:22.941369 | orchestrator | Thursday 09 April 2026 03:01:57 +0000 (0:00:00.983) 0:08:37.798 ********
2026-04-09 03:02:22.941383 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:02:22.941396 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:02:22.941410 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:02:22.941424 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:02:22.941437 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:02:22.941450 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:02:22.941463 | orchestrator |
2026-04-09 03:02:22.941494 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 03:02:22.941507 | orchestrator | Thursday 09 April 2026 03:01:58 +0000 (0:00:00.787) 0:08:38.586 ********
2026-04-09 03:02:22.941520 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:02:22.941533 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:02:22.941545 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:02:22.941559 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:02:22.941571 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:02:22.941584 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:02:22.941597 | orchestrator |
2026-04-09 03:02:22.941610 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container]
************************* 2026-04-09 03:02:22.941623 | orchestrator | Thursday 09 April 2026 03:01:59 +0000 (0:00:01.403) 0:08:39.989 ******** 2026-04-09 03:02:22.941636 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:22.941649 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:22.941662 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:22.941675 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:02:22.941688 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:02:22.941702 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:02:22.941715 | orchestrator | 2026-04-09 03:02:22.941728 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 03:02:22.941741 | orchestrator | Thursday 09 April 2026 03:02:00 +0000 (0:00:00.743) 0:08:40.732 ******** 2026-04-09 03:02:22.941775 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:22.941787 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:22.941798 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:22.941808 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:02:22.941820 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:02:22.941830 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:02:22.941841 | orchestrator | 2026-04-09 03:02:22.941852 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 03:02:22.941863 | orchestrator | Thursday 09 April 2026 03:02:01 +0000 (0:00:00.944) 0:08:41.677 ******** 2026-04-09 03:02:22.941874 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:22.941885 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:22.941896 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:22.941907 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:02:22.941917 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:02:22.941928 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:02:22.941939 | orchestrator 
| 2026-04-09 03:02:22.941950 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 03:02:22.941961 | orchestrator | Thursday 09 April 2026 03:02:02 +0000 (0:00:01.067) 0:08:42.745 ******** 2026-04-09 03:02:22.941971 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:22.941982 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:22.941993 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:22.942003 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:02:22.942086 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:02:22.942122 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:02:22.942133 | orchestrator | 2026-04-09 03:02:22.942145 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 03:02:22.942155 | orchestrator | Thursday 09 April 2026 03:02:04 +0000 (0:00:01.472) 0:08:44.217 ******** 2026-04-09 03:02:22.942166 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:22.942177 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:22.942188 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:22.942199 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:02:22.942210 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:02:22.942221 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:02:22.942232 | orchestrator | 2026-04-09 03:02:22.942243 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 03:02:22.942253 | orchestrator | Thursday 09 April 2026 03:02:04 +0000 (0:00:00.693) 0:08:44.911 ******** 2026-04-09 03:02:22.942264 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:22.942275 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:22.942286 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:22.942297 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:02:22.942308 | orchestrator | ok: [testbed-node-1] 2026-04-09 
03:02:22.942337 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:02:22.942349 | orchestrator | 2026-04-09 03:02:22.942359 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 03:02:22.942370 | orchestrator | Thursday 09 April 2026 03:02:05 +0000 (0:00:01.004) 0:08:45.916 ******** 2026-04-09 03:02:22.942381 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:22.942392 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:22.942402 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:22.942413 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:02:22.942424 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:02:22.942435 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:02:22.942445 | orchestrator | 2026-04-09 03:02:22.942456 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 03:02:22.942467 | orchestrator | Thursday 09 April 2026 03:02:06 +0000 (0:00:00.765) 0:08:46.682 ******** 2026-04-09 03:02:22.942478 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:22.942488 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:22.942499 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:22.942510 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:02:22.942521 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:02:22.942544 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:02:22.942555 | orchestrator | 2026-04-09 03:02:22.942566 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 03:02:22.942576 | orchestrator | Thursday 09 April 2026 03:02:07 +0000 (0:00:01.058) 0:08:47.740 ******** 2026-04-09 03:02:22.942587 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:22.942598 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:22.942608 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:22.942619 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 03:02:22.942630 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:02:22.942641 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:02:22.942683 | orchestrator | 2026-04-09 03:02:22.942707 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 03:02:22.942730 | orchestrator | Thursday 09 April 2026 03:02:08 +0000 (0:00:00.711) 0:08:48.452 ******** 2026-04-09 03:02:22.942742 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:22.942765 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:22.942788 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:22.942799 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:02:22.942810 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:02:22.942821 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:02:22.942832 | orchestrator | 2026-04-09 03:02:22.942842 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 03:02:22.942853 | orchestrator | Thursday 09 April 2026 03:02:09 +0000 (0:00:00.945) 0:08:49.397 ******** 2026-04-09 03:02:22.942864 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:22.942875 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:22.942886 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:22.942897 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:02:22.942908 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:02:22.942918 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:02:22.942929 | orchestrator | 2026-04-09 03:02:22.942940 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 03:02:22.942951 | orchestrator | Thursday 09 April 2026 03:02:09 +0000 (0:00:00.675) 0:08:50.072 ******** 2026-04-09 03:02:22.942962 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:22.942973 | orchestrator | skipping: [testbed-node-4] 
2026-04-09 03:02:22.942983 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:22.942994 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:02:22.943005 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:02:22.943015 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:02:22.943026 | orchestrator | 2026-04-09 03:02:22.943037 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 03:02:22.943048 | orchestrator | Thursday 09 April 2026 03:02:10 +0000 (0:00:00.967) 0:08:51.040 ******** 2026-04-09 03:02:22.943058 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:22.943069 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:22.943093 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:22.943126 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:02:22.943148 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:02:22.943159 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:02:22.943169 | orchestrator | 2026-04-09 03:02:22.943180 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 03:02:22.943191 | orchestrator | Thursday 09 April 2026 03:02:11 +0000 (0:00:00.726) 0:08:51.766 ******** 2026-04-09 03:02:22.943202 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:22.943212 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:22.943223 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:22.943234 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:02:22.943244 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:02:22.943255 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:02:22.943266 | orchestrator | 2026-04-09 03:02:22.943277 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-09 03:02:22.943287 | orchestrator | Thursday 09 April 2026 03:02:13 +0000 (0:00:01.457) 0:08:53.224 ******** 2026-04-09 03:02:22.943307 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-04-09 03:02:22.943318 | orchestrator | 2026-04-09 03:02:22.943329 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-09 03:02:22.943340 | orchestrator | Thursday 09 April 2026 03:02:17 +0000 (0:00:04.197) 0:08:57.422 ******** 2026-04-09 03:02:22.943351 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 03:02:22.943361 | orchestrator | 2026-04-09 03:02:22.943372 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-09 03:02:22.943383 | orchestrator | Thursday 09 April 2026 03:02:20 +0000 (0:00:02.818) 0:09:00.240 ******** 2026-04-09 03:02:22.943394 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:02:22.943404 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:02:22.943415 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:02:22.943426 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:02:22.943437 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:02:22.943447 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:02:22.943458 | orchestrator | 2026-04-09 03:02:22.943469 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-09 03:02:22.943480 | orchestrator | Thursday 09 April 2026 03:02:21 +0000 (0:00:01.583) 0:09:01.824 ******** 2026-04-09 03:02:22.943491 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:02:22.943501 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:02:22.943519 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:02:47.879572 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:02:47.879672 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:02:47.879684 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:02:47.879694 | orchestrator | 2026-04-09 03:02:47.879704 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-04-09 03:02:47.879751 | orchestrator | Thursday 09 April 2026 03:02:22 +0000 (0:00:01.305) 0:09:03.130 ******** 2026-04-09 03:02:47.879761 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:02:47.879770 | orchestrator | 2026-04-09 03:02:47.879779 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-09 03:02:47.879787 | orchestrator | Thursday 09 April 2026 03:02:24 +0000 (0:00:01.488) 0:09:04.619 ******** 2026-04-09 03:02:47.879795 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:02:47.879819 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:02:47.879836 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:02:47.879845 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:02:47.879853 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:02:47.879861 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:02:47.879869 | orchestrator | 2026-04-09 03:02:47.879877 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-09 03:02:47.879886 | orchestrator | Thursday 09 April 2026 03:02:26 +0000 (0:00:01.758) 0:09:06.377 ******** 2026-04-09 03:02:47.879894 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:02:47.879902 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:02:47.879910 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:02:47.879918 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:02:47.879926 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:02:47.879934 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:02:47.879942 | orchestrator | 2026-04-09 03:02:47.879950 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-09 03:02:47.879958 | orchestrator | Thursday 09 April 2026 03:02:30 +0000 (0:00:04.480) 
0:09:10.858 ******** 2026-04-09 03:02:47.879971 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:02:47.879979 | orchestrator | 2026-04-09 03:02:47.879987 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-09 03:02:47.880014 | orchestrator | Thursday 09 April 2026 03:02:32 +0000 (0:00:01.382) 0:09:12.240 ******** 2026-04-09 03:02:47.880023 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.880032 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.880040 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.880048 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:02:47.880056 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:02:47.880082 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:02:47.880090 | orchestrator | 2026-04-09 03:02:47.880098 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-09 03:02:47.880106 | orchestrator | Thursday 09 April 2026 03:02:32 +0000 (0:00:00.728) 0:09:12.968 ******** 2026-04-09 03:02:47.880114 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:02:47.880124 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:02:47.880133 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:02:47.880143 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:02:47.880152 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:02:47.880162 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:02:47.880171 | orchestrator | 2026-04-09 03:02:47.880181 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-09 03:02:47.880190 | orchestrator | Thursday 09 April 2026 03:02:35 +0000 (0:00:02.518) 0:09:15.487 ******** 2026-04-09 03:02:47.880199 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.880209 | 
orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.880219 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.880227 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:02:47.880236 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:02:47.880244 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:02:47.880251 | orchestrator | 2026-04-09 03:02:47.880259 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-09 03:02:47.880267 | orchestrator | 2026-04-09 03:02:47.880276 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 03:02:47.880284 | orchestrator | Thursday 09 April 2026 03:02:36 +0000 (0:00:00.947) 0:09:16.435 ******** 2026-04-09 03:02:47.880293 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:02:47.880301 | orchestrator | 2026-04-09 03:02:47.880309 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 03:02:47.880317 | orchestrator | Thursday 09 April 2026 03:02:37 +0000 (0:00:00.946) 0:09:17.381 ******** 2026-04-09 03:02:47.880325 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:02:47.880333 | orchestrator | 2026-04-09 03:02:47.880341 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 03:02:47.880348 | orchestrator | Thursday 09 April 2026 03:02:38 +0000 (0:00:00.895) 0:09:18.277 ******** 2026-04-09 03:02:47.880356 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:47.880364 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:47.880372 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:47.880380 | orchestrator | 2026-04-09 03:02:47.880387 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-04-09 03:02:47.880395 | orchestrator | Thursday 09 April 2026 03:02:38 +0000 (0:00:00.380) 0:09:18.657 ******** 2026-04-09 03:02:47.880403 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.880411 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.880419 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.880426 | orchestrator | 2026-04-09 03:02:47.880434 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 03:02:47.880442 | orchestrator | Thursday 09 April 2026 03:02:39 +0000 (0:00:00.778) 0:09:19.436 ******** 2026-04-09 03:02:47.880450 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.880458 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.880480 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.880489 | orchestrator | 2026-04-09 03:02:47.880497 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 03:02:47.880511 | orchestrator | Thursday 09 April 2026 03:02:39 +0000 (0:00:00.754) 0:09:20.191 ******** 2026-04-09 03:02:47.880519 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.880526 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.880534 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.880542 | orchestrator | 2026-04-09 03:02:47.880550 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 03:02:47.880557 | orchestrator | Thursday 09 April 2026 03:02:41 +0000 (0:00:01.083) 0:09:21.274 ******** 2026-04-09 03:02:47.880565 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:47.880573 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:47.880581 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:47.880589 | orchestrator | 2026-04-09 03:02:47.880597 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 
03:02:47.880605 | orchestrator | Thursday 09 April 2026 03:02:41 +0000 (0:00:00.356) 0:09:21.630 ******** 2026-04-09 03:02:47.880612 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:47.880620 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:47.880628 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:47.880636 | orchestrator | 2026-04-09 03:02:47.880644 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 03:02:47.880651 | orchestrator | Thursday 09 April 2026 03:02:41 +0000 (0:00:00.370) 0:09:22.001 ******** 2026-04-09 03:02:47.880659 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:47.880667 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:47.880675 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:47.880683 | orchestrator | 2026-04-09 03:02:47.880690 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 03:02:47.880698 | orchestrator | Thursday 09 April 2026 03:02:42 +0000 (0:00:00.331) 0:09:22.333 ******** 2026-04-09 03:02:47.880706 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.880714 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.880722 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.880730 | orchestrator | 2026-04-09 03:02:47.880738 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 03:02:47.880751 | orchestrator | Thursday 09 April 2026 03:02:43 +0000 (0:00:01.035) 0:09:23.368 ******** 2026-04-09 03:02:47.880759 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.880767 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.880774 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.880782 | orchestrator | 2026-04-09 03:02:47.880790 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 03:02:47.880798 | orchestrator | 
Thursday 09 April 2026 03:02:43 +0000 (0:00:00.786) 0:09:24.155 ******** 2026-04-09 03:02:47.880806 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:47.880813 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:47.880821 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:47.880829 | orchestrator | 2026-04-09 03:02:47.880837 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 03:02:47.880845 | orchestrator | Thursday 09 April 2026 03:02:44 +0000 (0:00:00.357) 0:09:24.512 ******** 2026-04-09 03:02:47.880852 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:47.880860 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:47.880868 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:47.880876 | orchestrator | 2026-04-09 03:02:47.880884 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 03:02:47.880891 | orchestrator | Thursday 09 April 2026 03:02:44 +0000 (0:00:00.348) 0:09:24.860 ******** 2026-04-09 03:02:47.880899 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.880907 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.880915 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.880922 | orchestrator | 2026-04-09 03:02:47.880930 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 03:02:47.880938 | orchestrator | Thursday 09 April 2026 03:02:45 +0000 (0:00:00.674) 0:09:25.534 ******** 2026-04-09 03:02:47.880951 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.880959 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.880967 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.880975 | orchestrator | 2026-04-09 03:02:47.880983 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 03:02:47.880991 | orchestrator | Thursday 09 April 2026 03:02:45 +0000 
(0:00:00.382) 0:09:25.917 ******** 2026-04-09 03:02:47.880999 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.881007 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.881015 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.881022 | orchestrator | 2026-04-09 03:02:47.881030 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 03:02:47.881038 | orchestrator | Thursday 09 April 2026 03:02:46 +0000 (0:00:00.355) 0:09:26.273 ******** 2026-04-09 03:02:47.881046 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:47.881054 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:47.881077 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:47.881086 | orchestrator | 2026-04-09 03:02:47.881093 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 03:02:47.881101 | orchestrator | Thursday 09 April 2026 03:02:46 +0000 (0:00:00.331) 0:09:26.604 ******** 2026-04-09 03:02:47.881109 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:47.881117 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:47.881125 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:47.881133 | orchestrator | 2026-04-09 03:02:47.881141 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 03:02:47.881148 | orchestrator | Thursday 09 April 2026 03:02:47 +0000 (0:00:00.685) 0:09:27.290 ******** 2026-04-09 03:02:47.881156 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:02:47.881164 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:02:47.881172 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:02:47.881180 | orchestrator | 2026-04-09 03:02:47.881188 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 03:02:47.881196 | orchestrator | Thursday 09 April 2026 03:02:47 +0000 (0:00:00.384) 
0:09:27.674 ******** 2026-04-09 03:02:47.881203 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:02:47.881211 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:02:47.881219 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:02:47.881227 | orchestrator | 2026-04-09 03:02:47.881241 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 03:03:27.430954 | orchestrator | Thursday 09 April 2026 03:02:47 +0000 (0:00:00.397) 0:09:28.072 ******** 2026-04-09 03:03:27.431114 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:27.431132 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:27.431158 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:27.431181 | orchestrator | 2026-04-09 03:03:27.431194 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-09 03:03:27.431205 | orchestrator | Thursday 09 April 2026 03:02:48 +0000 (0:00:00.924) 0:09:28.996 ******** 2026-04-09 03:03:27.431216 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:27.431228 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:27.431239 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-04-09 03:03:27.431251 | orchestrator | 2026-04-09 03:03:27.431262 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-04-09 03:03:27.431273 | orchestrator | Thursday 09 April 2026 03:02:49 +0000 (0:00:00.462) 0:09:29.458 ******** 2026-04-09 03:03:27.431284 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 03:03:27.431295 | orchestrator | 2026-04-09 03:03:27.431306 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-09 03:03:27.431317 | orchestrator | Thursday 09 April 2026 03:02:51 +0000 (0:00:02.162) 0:09:31.621 ******** 2026-04-09 03:03:27.431329 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-09 03:03:27.431367 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:27.431379 | orchestrator | 2026-04-09 03:03:27.431391 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-09 03:03:27.431401 | orchestrator | Thursday 09 April 2026 03:02:51 +0000 (0:00:00.264) 0:09:31.886 ******** 2026-04-09 03:03:27.431430 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 03:03:27.431449 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 03:03:27.431461 | orchestrator | 2026-04-09 03:03:27.431472 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-04-09 03:03:27.431483 | orchestrator | Thursday 09 April 2026 03:02:59 +0000 (0:00:08.205) 0:09:40.091 ******** 2026-04-09 03:03:27.431497 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 03:03:27.431509 | orchestrator | 2026-04-09 03:03:27.431523 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-09 03:03:27.431535 | orchestrator | Thursday 09 April 2026 03:03:03 +0000 (0:00:03.851) 0:09:43.942 ******** 2026-04-09 03:03:27.431548 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-09 03:03:27.431562 | orchestrator | 2026-04-09 03:03:27.431574 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-09 03:03:27.431588 | orchestrator | Thursday 09 April 2026 03:03:04 +0000 (0:00:00.965) 0:09:44.908 ******** 2026-04-09 03:03:27.431600 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-09 03:03:27.431613 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-09 03:03:27.431625 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-09 03:03:27.431637 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-09 03:03:27.431650 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-09 03:03:27.431662 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-09 03:03:27.431672 | orchestrator | 2026-04-09 03:03:27.431683 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-09 03:03:27.431693 | orchestrator | Thursday 09 April 2026 03:03:05 +0000 (0:00:01.176) 0:09:46.084 ******** 2026-04-09 03:03:27.431704 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 03:03:27.431715 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 03:03:27.431725 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 03:03:27.431736 | orchestrator | 2026-04-09 03:03:27.431746 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-09 03:03:27.431757 | orchestrator | Thursday 09 April 2026 03:03:08 +0000 (0:00:02.177) 0:09:48.262 ******** 2026-04-09 03:03:27.431767 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 03:03:27.431779 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-04-09 03:03:27.431790 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:03:27.431800 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 03:03:27.431811 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 03:03:27.431822 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:03:27.431832 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 03:03:27.431843 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 03:03:27.431862 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:03:27.431873 | orchestrator | 2026-04-09 03:03:27.431887 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-09 03:03:27.431926 | orchestrator | Thursday 09 April 2026 03:03:09 +0000 (0:00:01.478) 0:09:49.740 ******** 2026-04-09 03:03:27.431946 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:03:27.431963 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:03:27.431981 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:03:27.431999 | orchestrator | 2026-04-09 03:03:27.432087 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-09 03:03:27.432110 | orchestrator | Thursday 09 April 2026 03:03:12 +0000 (0:00:02.784) 0:09:52.524 ******** 2026-04-09 03:03:27.432122 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:27.432133 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:27.432143 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:27.432154 | orchestrator | 2026-04-09 03:03:27.432164 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-09 03:03:27.432175 | orchestrator | Thursday 09 April 2026 03:03:12 +0000 (0:00:00.311) 0:09:52.835 ******** 2026-04-09 03:03:27.432186 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-09 03:03:27.432197 | orchestrator | 2026-04-09 03:03:27.432207 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-09 03:03:27.432218 | orchestrator | Thursday 09 April 2026 03:03:13 +0000 (0:00:00.726) 0:09:53.562 ******** 2026-04-09 03:03:27.432229 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:03:27.432239 | orchestrator | 2026-04-09 03:03:27.432250 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-09 03:03:27.432261 | orchestrator | Thursday 09 April 2026 03:03:13 +0000 (0:00:00.602) 0:09:54.164 ******** 2026-04-09 03:03:27.432271 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:03:27.432282 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:03:27.432293 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:03:27.432303 | orchestrator | 2026-04-09 03:03:27.432314 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-09 03:03:27.432333 | orchestrator | Thursday 09 April 2026 03:03:15 +0000 (0:00:01.176) 0:09:55.341 ******** 2026-04-09 03:03:27.432344 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:03:27.432355 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:03:27.432365 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:03:27.432376 | orchestrator | 2026-04-09 03:03:27.432387 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-09 03:03:27.432397 | orchestrator | Thursday 09 April 2026 03:03:16 +0000 (0:00:01.436) 0:09:56.778 ******** 2026-04-09 03:03:27.432408 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:03:27.432419 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:03:27.432429 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:03:27.432440 | orchestrator | 2026-04-09 
03:03:27.432450 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-09 03:03:27.432461 | orchestrator | Thursday 09 April 2026 03:03:18 +0000 (0:00:01.705) 0:09:58.483 ******** 2026-04-09 03:03:27.432471 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:03:27.432482 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:03:27.432493 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:03:27.432503 | orchestrator | 2026-04-09 03:03:27.432514 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-09 03:03:27.432525 | orchestrator | Thursday 09 April 2026 03:03:20 +0000 (0:00:01.950) 0:10:00.434 ******** 2026-04-09 03:03:27.432536 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:27.432553 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:27.432570 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:27.432587 | orchestrator | 2026-04-09 03:03:27.432605 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 03:03:27.432636 | orchestrator | Thursday 09 April 2026 03:03:21 +0000 (0:00:01.746) 0:10:02.180 ******** 2026-04-09 03:03:27.432654 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:03:27.432671 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:03:27.432689 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:03:27.432707 | orchestrator | 2026-04-09 03:03:27.432724 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-09 03:03:27.432742 | orchestrator | Thursday 09 April 2026 03:03:22 +0000 (0:00:00.838) 0:10:03.019 ******** 2026-04-09 03:03:27.432759 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:03:27.432777 | orchestrator | 2026-04-09 03:03:27.432796 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-04-09 03:03:27.432809 | orchestrator | Thursday 09 April 2026 03:03:23 +0000 (0:00:00.907) 0:10:03.927 ******** 2026-04-09 03:03:27.432820 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:27.432831 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:27.432841 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:27.432852 | orchestrator | 2026-04-09 03:03:27.432863 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-09 03:03:27.432873 | orchestrator | Thursday 09 April 2026 03:03:24 +0000 (0:00:00.387) 0:10:04.315 ******** 2026-04-09 03:03:27.432884 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:03:27.432894 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:03:27.432905 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:03:27.432916 | orchestrator | 2026-04-09 03:03:27.432926 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-09 03:03:27.432937 | orchestrator | Thursday 09 April 2026 03:03:25 +0000 (0:00:01.259) 0:10:05.574 ******** 2026-04-09 03:03:27.432947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 03:03:27.432959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 03:03:27.432969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 03:03:27.432980 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:27.432991 | orchestrator | 2026-04-09 03:03:27.433001 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-09 03:03:27.433012 | orchestrator | Thursday 09 April 2026 03:03:26 +0000 (0:00:01.044) 0:10:06.619 ******** 2026-04-09 03:03:27.433057 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:27.433068 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:27.433091 | orchestrator | ok: [testbed-node-5] 2026-04-09 
03:03:46.959532 | orchestrator | 2026-04-09 03:03:46.960419 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-09 03:03:46.960474 | orchestrator | 2026-04-09 03:03:46.960487 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 03:03:46.960498 | orchestrator | Thursday 09 April 2026 03:03:27 +0000 (0:00:01.004) 0:10:07.623 ******** 2026-04-09 03:03:46.960510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:03:46.960523 | orchestrator | 2026-04-09 03:03:46.960534 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 03:03:46.960544 | orchestrator | Thursday 09 April 2026 03:03:28 +0000 (0:00:00.636) 0:10:08.259 ******** 2026-04-09 03:03:46.960555 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:03:46.960566 | orchestrator | 2026-04-09 03:03:46.960577 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 03:03:46.960593 | orchestrator | Thursday 09 April 2026 03:03:28 +0000 (0:00:00.893) 0:10:09.152 ******** 2026-04-09 03:03:46.960611 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.960632 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.960650 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:46.960728 | orchestrator | 2026-04-09 03:03:46.960741 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 03:03:46.960752 | orchestrator | Thursday 09 April 2026 03:03:29 +0000 (0:00:00.386) 0:10:09.539 ******** 2026-04-09 03:03:46.960763 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.960775 | orchestrator | ok: [testbed-node-4] 2026-04-09 
03:03:46.960785 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.960796 | orchestrator | 2026-04-09 03:03:46.960807 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 03:03:46.960818 | orchestrator | Thursday 09 April 2026 03:03:30 +0000 (0:00:00.769) 0:10:10.309 ******** 2026-04-09 03:03:46.960829 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.960854 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:46.960874 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.960898 | orchestrator | 2026-04-09 03:03:46.960925 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 03:03:46.960945 | orchestrator | Thursday 09 April 2026 03:03:31 +0000 (0:00:01.209) 0:10:11.518 ******** 2026-04-09 03:03:46.960964 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.960983 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:46.961059 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.961079 | orchestrator | 2026-04-09 03:03:46.961099 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 03:03:46.961121 | orchestrator | Thursday 09 April 2026 03:03:32 +0000 (0:00:00.824) 0:10:12.343 ******** 2026-04-09 03:03:46.961143 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.961163 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.961180 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:46.961191 | orchestrator | 2026-04-09 03:03:46.961202 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 03:03:46.961213 | orchestrator | Thursday 09 April 2026 03:03:32 +0000 (0:00:00.371) 0:10:12.715 ******** 2026-04-09 03:03:46.961225 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.961239 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.961256 | orchestrator | skipping: 
[testbed-node-5] 2026-04-09 03:03:46.961273 | orchestrator | 2026-04-09 03:03:46.961285 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 03:03:46.961296 | orchestrator | Thursday 09 April 2026 03:03:32 +0000 (0:00:00.338) 0:10:13.053 ******** 2026-04-09 03:03:46.961307 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.961317 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.961328 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:46.961339 | orchestrator | 2026-04-09 03:03:46.961350 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 03:03:46.961360 | orchestrator | Thursday 09 April 2026 03:03:33 +0000 (0:00:00.637) 0:10:13.690 ******** 2026-04-09 03:03:46.961371 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.961382 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:46.961393 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.961404 | orchestrator | 2026-04-09 03:03:46.961415 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 03:03:46.961442 | orchestrator | Thursday 09 April 2026 03:03:34 +0000 (0:00:00.802) 0:10:14.493 ******** 2026-04-09 03:03:46.961464 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.961476 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:46.961486 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.961497 | orchestrator | 2026-04-09 03:03:46.961508 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 03:03:46.961519 | orchestrator | Thursday 09 April 2026 03:03:35 +0000 (0:00:00.776) 0:10:15.269 ******** 2026-04-09 03:03:46.961530 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.961541 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.961551 | orchestrator | skipping: [testbed-node-5] 2026-04-09 
03:03:46.961562 | orchestrator | 2026-04-09 03:03:46.961573 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 03:03:46.961598 | orchestrator | Thursday 09 April 2026 03:03:35 +0000 (0:00:00.374) 0:10:15.644 ******** 2026-04-09 03:03:46.961609 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.961620 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.961632 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:46.961642 | orchestrator | 2026-04-09 03:03:46.961653 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 03:03:46.961664 | orchestrator | Thursday 09 April 2026 03:03:36 +0000 (0:00:00.670) 0:10:16.314 ******** 2026-04-09 03:03:46.961675 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.961686 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:46.961697 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.961707 | orchestrator | 2026-04-09 03:03:46.961718 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 03:03:46.961729 | orchestrator | Thursday 09 April 2026 03:03:36 +0000 (0:00:00.422) 0:10:16.737 ******** 2026-04-09 03:03:46.961740 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.961777 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:46.961788 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.961799 | orchestrator | 2026-04-09 03:03:46.961810 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 03:03:46.961821 | orchestrator | Thursday 09 April 2026 03:03:36 +0000 (0:00:00.417) 0:10:17.155 ******** 2026-04-09 03:03:46.961832 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.961843 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:46.961854 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.961865 | orchestrator | 2026-04-09 
03:03:46.961876 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 03:03:46.961887 | orchestrator | Thursday 09 April 2026 03:03:37 +0000 (0:00:00.377) 0:10:17.533 ******** 2026-04-09 03:03:46.961898 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.961909 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.961920 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:46.961931 | orchestrator | 2026-04-09 03:03:46.961941 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 03:03:46.961952 | orchestrator | Thursday 09 April 2026 03:03:38 +0000 (0:00:00.676) 0:10:18.209 ******** 2026-04-09 03:03:46.961963 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.961974 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.961985 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:46.962085 | orchestrator | 2026-04-09 03:03:46.962097 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 03:03:46.962108 | orchestrator | Thursday 09 April 2026 03:03:38 +0000 (0:00:00.364) 0:10:18.573 ******** 2026-04-09 03:03:46.962118 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.962129 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.962140 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:46.962151 | orchestrator | 2026-04-09 03:03:46.962161 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 03:03:46.962172 | orchestrator | Thursday 09 April 2026 03:03:38 +0000 (0:00:00.373) 0:10:18.946 ******** 2026-04-09 03:03:46.962183 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.962193 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:46.962204 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.962215 | orchestrator | 2026-04-09 03:03:46.962235 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 03:03:46.962246 | orchestrator | Thursday 09 April 2026 03:03:39 +0000 (0:00:00.389) 0:10:19.335 ******** 2026-04-09 03:03:46.962257 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:03:46.962268 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:03:46.962278 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:03:46.962289 | orchestrator | 2026-04-09 03:03:46.962300 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-09 03:03:46.962310 | orchestrator | Thursday 09 April 2026 03:03:40 +0000 (0:00:00.918) 0:10:20.254 ******** 2026-04-09 03:03:46.962331 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:03:46.962342 | orchestrator | 2026-04-09 03:03:46.962353 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 03:03:46.962364 | orchestrator | Thursday 09 April 2026 03:03:40 +0000 (0:00:00.615) 0:10:20.869 ******** 2026-04-09 03:03:46.962375 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 03:03:46.962386 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 03:03:46.962397 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 03:03:46.962408 | orchestrator | 2026-04-09 03:03:46.962419 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 03:03:46.962429 | orchestrator | Thursday 09 April 2026 03:03:43 +0000 (0:00:02.579) 0:10:23.449 ******** 2026-04-09 03:03:46.962440 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 03:03:46.962451 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 03:03:46.962462 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:03:46.962473 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-04-09 03:03:46.962483 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 03:03:46.962494 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:03:46.962505 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 03:03:46.962515 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 03:03:46.962526 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:03:46.962537 | orchestrator | 2026-04-09 03:03:46.962547 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-09 03:03:46.962558 | orchestrator | Thursday 09 April 2026 03:03:44 +0000 (0:00:01.541) 0:10:24.991 ******** 2026-04-09 03:03:46.962569 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:03:46.962580 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:03:46.962590 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:03:46.962601 | orchestrator | 2026-04-09 03:03:46.962612 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-09 03:03:46.962623 | orchestrator | Thursday 09 April 2026 03:03:45 +0000 (0:00:00.389) 0:10:25.380 ******** 2026-04-09 03:03:46.962633 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:03:46.962645 | orchestrator | 2026-04-09 03:03:46.962656 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-09 03:03:46.962666 | orchestrator | Thursday 09 April 2026 03:03:46 +0000 (0:00:00.901) 0:10:26.281 ******** 2026-04-09 03:03:46.962678 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 03:03:46.962691 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 03:03:46.962713 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 03:04:39.299386 | orchestrator | 2026-04-09 03:04:39.299467 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-09 03:04:39.299475 | orchestrator | Thursday 09 April 2026 03:03:46 +0000 (0:00:00.867) 0:10:27.149 ******** 2026-04-09 03:04:39.299480 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 03:04:39.299486 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 03:04:39.299491 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 03:04:39.299496 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 03:04:39.299515 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 03:04:39.299519 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 03:04:39.299523 | orchestrator | 2026-04-09 03:04:39.299528 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 03:04:39.299532 | orchestrator | Thursday 09 April 2026 03:03:51 +0000 (0:00:04.498) 0:10:31.647 ******** 2026-04-09 03:04:39.299536 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 03:04:39.299541 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 03:04:39.299546 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 03:04:39.299550 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 03:04:39.299564 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 03:04:39.299568 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 03:04:39.299573 | orchestrator | 2026-04-09 03:04:39.299577 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 03:04:39.299581 | orchestrator | Thursday 09 April 2026 03:03:53 +0000 (0:00:02.528) 0:10:34.176 ******** 2026-04-09 03:04:39.299586 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 03:04:39.299591 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:04:39.299595 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 03:04:39.299599 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:04:39.299603 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 03:04:39.299607 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:04:39.299611 | orchestrator | 2026-04-09 03:04:39.299615 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-09 03:04:39.299619 | orchestrator | Thursday 09 April 2026 03:03:55 +0000 (0:00:01.582) 0:10:35.759 ******** 2026-04-09 03:04:39.299623 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-09 03:04:39.299627 | orchestrator | 2026-04-09 03:04:39.299630 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-09 03:04:39.299634 | orchestrator | Thursday 09 April 2026 03:03:55 +0000 (0:00:00.249) 0:10:36.009 ******** 2026-04-09 03:04:39.299638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-09 03:04:39.299642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 03:04:39.299646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 03:04:39.299650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 03:04:39.299654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 03:04:39.299658 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:04:39.299662 | orchestrator | 2026-04-09 03:04:39.299666 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-09 03:04:39.299669 | orchestrator | Thursday 09 April 2026 03:03:56 +0000 (0:00:00.700) 0:10:36.710 ******** 2026-04-09 03:04:39.299673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 03:04:39.299677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 03:04:39.299681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 03:04:39.299689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 03:04:39.299693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 03:04:39.299696 | orchestrator | skipping: [testbed-node-3] 2026-04-09 
03:04:39.299701 | orchestrator | 2026-04-09 03:04:39.299704 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-09 03:04:39.299708 | orchestrator | Thursday 09 April 2026 03:03:57 +0000 (0:00:00.679) 0:10:37.390 ******** 2026-04-09 03:04:39.299722 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 03:04:39.299728 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 03:04:39.299731 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 03:04:39.299735 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 03:04:39.299739 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 03:04:39.299743 | orchestrator | 2026-04-09 03:04:39.299747 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-09 03:04:39.299751 | orchestrator | Thursday 09 April 2026 03:04:27 +0000 (0:00:30.732) 0:11:08.122 ******** 2026-04-09 03:04:39.299754 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:04:39.299758 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:04:39.299762 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:04:39.299766 | orchestrator | 2026-04-09 03:04:39.299769 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-09 03:04:39.299773 | orchestrator | 
Thursday 09 April 2026 03:04:28 +0000 (0:00:00.350) 0:11:08.472 ******** 2026-04-09 03:04:39.299777 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:04:39.299781 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:04:39.299787 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:04:39.299791 | orchestrator | 2026-04-09 03:04:39.299795 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-09 03:04:39.299799 | orchestrator | Thursday 09 April 2026 03:04:28 +0000 (0:00:00.346) 0:11:08.819 ******** 2026-04-09 03:04:39.299802 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:04:39.299806 | orchestrator | 2026-04-09 03:04:39.299810 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-09 03:04:39.299814 | orchestrator | Thursday 09 April 2026 03:04:29 +0000 (0:00:00.961) 0:11:09.780 ******** 2026-04-09 03:04:39.299817 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:04:39.299821 | orchestrator | 2026-04-09 03:04:39.299825 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-09 03:04:39.299829 | orchestrator | Thursday 09 April 2026 03:04:30 +0000 (0:00:00.881) 0:11:10.661 ******** 2026-04-09 03:04:39.299833 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:04:39.299837 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:04:39.299840 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:04:39.299844 | orchestrator | 2026-04-09 03:04:39.299848 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-09 03:04:39.299852 | orchestrator | Thursday 09 April 2026 03:04:31 +0000 (0:00:01.315) 0:11:11.977 ******** 2026-04-09 03:04:39.299859 | orchestrator | changed: 
[testbed-node-3]
2026-04-09 03:04:39.299863 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:04:39.299867 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:04:39.299871 | orchestrator |
2026-04-09 03:04:39.299874 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-09 03:04:39.299878 | orchestrator | Thursday 09 April 2026 03:04:32 +0000 (0:00:01.177) 0:11:13.155 ********
2026-04-09 03:04:39.299882 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:04:39.299886 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:04:39.299889 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:04:39.299893 | orchestrator |
2026-04-09 03:04:39.299897 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-09 03:04:39.299901 | orchestrator | Thursday 09 April 2026 03:04:34 +0000 (0:00:01.824) 0:11:14.979 ********
2026-04-09 03:04:39.299904 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 03:04:39.299908 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 03:04:39.299912 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 03:04:39.299940 | orchestrator |
2026-04-09 03:04:39.299946 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 03:04:39.299952 | orchestrator | Thursday 09 April 2026 03:04:37 +0000 (0:00:02.898) 0:11:17.877 ********
2026-04-09 03:04:39.299958 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:04:39.299965 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:04:39.299970 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:04:39.299974 | orchestrator |
2026-04-09 03:04:39.299979 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-09 03:04:39.299983 | orchestrator | Thursday 09 April 2026 03:04:38 +0000 (0:00:00.395) 0:11:18.272 ********
2026-04-09 03:04:39.299988 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 03:04:39.299993 | orchestrator |
2026-04-09 03:04:39.299997 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-09 03:04:39.300001 | orchestrator | Thursday 09 April 2026 03:04:39 +0000 (0:00:00.981) 0:11:19.254 ********
2026-04-09 03:04:39.300008 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:04:42.040895 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:04:42.041077 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:04:42.041097 | orchestrator |
2026-04-09 03:04:42.041112 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-09 03:04:42.041121 | orchestrator | Thursday 09 April 2026 03:04:39 +0000 (0:00:00.406) 0:11:19.660 ********
2026-04-09 03:04:42.041128 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:04:42.041136 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:04:42.041143 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:04:42.041149 | orchestrator |
2026-04-09 03:04:42.041157 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-09 03:04:42.041164 | orchestrator | Thursday 09 April 2026 03:04:39 +0000 (0:00:00.376) 0:11:20.037 ********
2026-04-09 03:04:42.041171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 03:04:42.041179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 03:04:42.041190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 03:04:42.041201 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:04:42.041211 | orchestrator |
2026-04-09 03:04:42.041220 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-09 03:04:42.041232 | orchestrator | Thursday 09 April 2026 03:04:40 +0000 (0:00:01.094) 0:11:21.132 ********
2026-04-09 03:04:42.041243 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:04:42.041249 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:04:42.041279 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:04:42.041285 | orchestrator |
2026-04-09 03:04:42.041292 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:04:42.041301 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-09 03:04:42.041324 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-09 03:04:42.041331 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-09 03:04:42.041337 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-09 03:04:42.041344 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-09 03:04:42.041351 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-09 03:04:42.041357 | orchestrator |
2026-04-09 03:04:42.041364 | orchestrator |
2026-04-09 03:04:42.041370 | orchestrator |
2026-04-09 03:04:42.041377 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:04:42.041384 | orchestrator | Thursday 09 April 2026 03:04:41 +0000 (0:00:00.586) 0:11:21.718 ********
2026-04-09 03:04:42.041391 | orchestrator | ===============================================================================
2026-04-09 03:04:42.041397 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 64.81s
2026-04-09 03:04:42.041404 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.40s
2026-04-09 03:04:42.041410 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.73s
2026-04-09 03:04:42.041417 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.30s
2026-04-09 03:04:42.041424 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.94s
2026-04-09 03:04:42.041430 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.91s
2026-04-09 03:04:42.041437 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.89s
2026-04-09 03:04:42.041444 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.77s
2026-04-09 03:04:42.041450 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.32s
2026-04-09 03:04:42.041457 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.21s
2026-04-09 03:04:42.041463 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.61s
2026-04-09 03:04:42.041470 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.44s
2026-04-09 03:04:42.041477 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.19s
2026-04-09 03:04:42.041483 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.50s
2026-04-09 03:04:42.041490 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.48s
2026-04-09 03:04:42.041497 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.20s
2026-04-09 03:04:42.041504 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.85s
2026-04-09 03:04:42.041510 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.69s
2026-04-09 03:04:42.041517 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.40s
2026-04-09 03:04:42.041523 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.19s
2026-04-09 03:04:44.768253 | orchestrator | 2026-04-09 03:04:44 | INFO  | Task d7f10599-ec13-43be-b17b-dd631f604659 (ceph-pools) was prepared for execution.
2026-04-09 03:04:44.768363 | orchestrator | 2026-04-09 03:04:44 | INFO  | It takes a moment until task d7f10599-ec13-43be-b17b-dd631f604659 (ceph-pools) has been started and output is visible here.
2026-04-09 03:05:00.322755 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 03:05:00.322951 | orchestrator | 2.16.14
2026-04-09 03:05:00.322966 | orchestrator |
2026-04-09 03:05:00.322974 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-09 03:05:00.322983 | orchestrator |
2026-04-09 03:05:00.322990 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 03:05:00.322997 | orchestrator | Thursday 09 April 2026 03:04:49 +0000 (0:00:00.665) 0:00:00.665 ********
2026-04-09 03:05:00.323003 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 03:05:00.323011 | orchestrator |
2026-04-09 03:05:00.323018 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 03:05:00.323025 | orchestrator | Thursday 09 April 2026 03:04:50 +0000 (0:00:00.756) 0:00:01.422 ********
2026-04-09 03:05:00.323031 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:00.323039 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:00.323045 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:00.323051 | orchestrator |
2026-04-09 03:05:00.323057 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 03:05:00.323064 | orchestrator | Thursday 09 April 2026 03:04:51 +0000 (0:00:00.675) 0:00:02.097 ********
2026-04-09 03:05:00.323070 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:00.323076 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:00.323082 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:00.323089 | orchestrator |
2026-04-09 03:05:00.323095 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 03:05:00.323101 | orchestrator | Thursday 09 April 2026 03:04:51 +0000 (0:00:00.334) 0:00:02.432 ********
2026-04-09 03:05:00.323107 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:00.323114 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:00.323120 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:00.323126 | orchestrator |
2026-04-09 03:05:00.323149 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 03:05:00.323155 | orchestrator | Thursday 09 April 2026 03:04:52 +0000 (0:00:00.891) 0:00:03.323 ********
2026-04-09 03:05:00.323162 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:00.323168 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:00.323174 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:00.323180 | orchestrator |
2026-04-09 03:05:00.323187 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 03:05:00.323193 | orchestrator | Thursday 09 April 2026 03:04:52 +0000 (0:00:00.360) 0:00:03.684 ********
2026-04-09 03:05:00.323199 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:00.323205 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:00.323212 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:00.323222 | orchestrator |
2026-04-09 03:05:00.323234 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 03:05:00.323246 | orchestrator | Thursday 09 April 2026 03:04:53 +0000 (0:00:00.316) 0:00:04.001 ********
2026-04-09 03:05:00.323257 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:00.323269 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:00.323280 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:00.323292 | orchestrator |
2026-04-09 03:05:00.323305 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 03:05:00.323317 | orchestrator | Thursday 09 April 2026 03:04:53 +0000 (0:00:00.362) 0:00:04.364 ********
2026-04-09 03:05:00.323329 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:00.323342 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:00.323353 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:00.323364 | orchestrator |
2026-04-09 03:05:00.323375 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 03:05:00.323418 | orchestrator | Thursday 09 April 2026 03:04:54 +0000 (0:00:00.602) 0:00:04.966 ********
2026-04-09 03:05:00.323431 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:00.323443 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:00.323456 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:00.323465 | orchestrator |
2026-04-09 03:05:00.323473 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 03:05:00.323481 | orchestrator | Thursday 09 April 2026 03:04:54 +0000 (0:00:00.335) 0:00:05.301 ********
2026-04-09 03:05:00.323489 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 03:05:00.323495 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 03:05:00.323501 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 03:05:00.323507 | orchestrator |
2026-04-09 03:05:00.323514 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 03:05:00.323520 | orchestrator | Thursday 09 April 2026 03:04:55 +0000 (0:00:00.762) 0:00:06.064 ********
2026-04-09 03:05:00.323526 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:00.323532 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:00.323538 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:00.323544 | orchestrator |
2026-04-09 03:05:00.323550 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 03:05:00.323556 | orchestrator | Thursday 09 April 2026 03:04:55 +0000 (0:00:00.469) 0:00:06.533 ********
2026-04-09 03:05:00.323562 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 03:05:00.323569 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 03:05:00.323575 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 03:05:00.323581 | orchestrator |
2026-04-09 03:05:00.323587 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 03:05:00.323593 | orchestrator | Thursday 09 April 2026 03:04:58 +0000 (0:00:02.312) 0:00:08.846 ********
2026-04-09 03:05:00.323600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 03:05:00.323607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 03:05:00.323613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 03:05:00.323620 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:00.323626 | orchestrator |
2026-04-09 03:05:00.323651 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 03:05:00.323658 | orchestrator | Thursday 09 April 2026 03:04:58 +0000 (0:00:00.745) 0:00:09.591 ********
2026-04-09 03:05:00.323667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 03:05:00.323676 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 03:05:00.323683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 03:05:00.323689 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:00.323695 | orchestrator |
2026-04-09 03:05:00.323705 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 03:05:00.323715 | orchestrator | Thursday 09 April 2026 03:04:59 +0000 (0:00:01.157) 0:00:10.749 ********
2026-04-09 03:05:00.323736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 03:05:00.323759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 03:05:00.323770 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 03:05:00.323776 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:00.323782 | orchestrator |
2026-04-09 03:05:00.323789 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 03:05:00.323795 | orchestrator | Thursday 09 April 2026 03:05:00 +0000 (0:00:00.186) 0:00:10.935 ********
2026-04-09 03:05:00.323804 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3b46de499f20', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 03:04:56.652505', 'end': '2026-04-09 03:04:56.705110', 'delta': '0:00:00.052605', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3b46de499f20'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 03:05:00.323817 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '344b9fc03006', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 03:04:57.242289', 'end': '2026-04-09 03:04:57.294802', 'delta': '0:00:00.052513', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['344b9fc03006'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 03:05:00.323830 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '66330ed4242e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 03:04:57.827136', 'end': '2026-04-09 03:04:57.866085', 'delta': '0:00:00.038949', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['66330ed4242e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 03:05:08.065137 | orchestrator |
2026-04-09 03:05:08.065236 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-09 03:05:08.065248 | orchestrator | Thursday 09 April 2026 03:05:00 +0000 (0:00:00.214) 0:00:11.150 ********
2026-04-09 03:05:08.065272 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:08.065280 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:08.065286 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:08.065292 | orchestrator |
2026-04-09 03:05:08.065299 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 03:05:08.065305 | orchestrator | Thursday 09 April 2026 03:05:00 +0000 (0:00:00.524) 0:00:11.675 ********
2026-04-09 03:05:08.065312 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-09 03:05:08.065319 | orchestrator |
2026-04-09 03:05:08.065338 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 03:05:08.065345 | orchestrator | Thursday 09 April 2026 03:05:02 +0000 (0:00:01.712) 0:00:13.387 ********
2026-04-09 03:05:08.065352 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065358 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065364 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065370 | orchestrator |
2026-04-09 03:05:08.065377 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 03:05:08.065383 | orchestrator | Thursday 09 April 2026 03:05:02 +0000 (0:00:00.345) 0:00:13.732 ********
2026-04-09 03:05:08.065389 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065396 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065402 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065408 | orchestrator |
2026-04-09 03:05:08.065414 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 03:05:08.065421 | orchestrator | Thursday 09 April 2026 03:05:03 +0000 (0:00:00.937) 0:00:14.669 ********
2026-04-09 03:05:08.065427 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065433 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065439 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065446 | orchestrator |
2026-04-09 03:05:08.065453 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-09 03:05:08.065459 | orchestrator | Thursday 09 April 2026 03:05:04 +0000 (0:00:00.349) 0:00:15.018 ********
2026-04-09 03:05:08.065465 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:08.065471 | orchestrator |
2026-04-09 03:05:08.065478 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-09 03:05:08.065484 | orchestrator | Thursday 09 April 2026 03:05:04 +0000 (0:00:00.155) 0:00:15.174 ********
2026-04-09 03:05:08.065490 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065497 | orchestrator |
2026-04-09 03:05:08.065503 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 03:05:08.065509 | orchestrator | Thursday 09 April 2026 03:05:04 +0000 (0:00:00.274) 0:00:15.449 ********
2026-04-09 03:05:08.065515 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065521 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065528 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065534 | orchestrator |
2026-04-09 03:05:08.065540 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-09 03:05:08.065546 | orchestrator | Thursday 09 April 2026 03:05:04 +0000 (0:00:00.338) 0:00:15.788 ********
2026-04-09 03:05:08.065552 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065559 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065565 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065571 | orchestrator |
2026-04-09 03:05:08.065577 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-09 03:05:08.065584 | orchestrator | Thursday 09 April 2026 03:05:05 +0000 (0:00:00.368) 0:00:16.156 ********
2026-04-09 03:05:08.065590 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065596 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065602 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065609 | orchestrator |
2026-04-09 03:05:08.065615 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-09 03:05:08.065621 | orchestrator | Thursday 09 April 2026 03:05:05 +0000 (0:00:00.614) 0:00:16.770 ********
2026-04-09 03:05:08.065632 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065639 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065645 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065651 | orchestrator |
2026-04-09 03:05:08.065658 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-09 03:05:08.065664 | orchestrator | Thursday 09 April 2026 03:05:06 +0000 (0:00:00.369) 0:00:17.140 ********
2026-04-09 03:05:08.065673 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065681 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065688 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065695 | orchestrator |
2026-04-09 03:05:08.065703 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-09 03:05:08.065711 | orchestrator | Thursday 09 April 2026 03:05:06 +0000 (0:00:00.380) 0:00:17.520 ********
2026-04-09 03:05:08.065719 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065727 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065734 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065742 | orchestrator |
2026-04-09 03:05:08.065750 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-09 03:05:08.065758 | orchestrator | Thursday 09 April 2026 03:05:07 +0000 (0:00:00.704) 0:00:18.224 ********
2026-04-09 03:05:08.065765 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:08.065773 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:08.065780 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:08.065787 | orchestrator |
2026-04-09 03:05:08.065795 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-09 03:05:08.065803 | orchestrator | Thursday 09 April 2026 03:05:07 +0000 (0:00:00.389) 0:00:18.614 ********
2026-04-09 03:05:08.065832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.065855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.065899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.065913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.065932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.065942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.065953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.065964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.065974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.065996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.114619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 03:05:08.114706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 03:05:08.114716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 03:05:08.114733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 03:05:08.114743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 03:05:08.114748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 03:05:08.114759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6',
'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.114764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.114771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.114776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.114781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.114788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.228907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.228991 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:05:08.229000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.229005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.229025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.229043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.229054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.229062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.229067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.229073 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:05:08.229077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.229082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.229086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.229094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.556479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.556581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.556626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.556639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.556650 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.556661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 03:05:08.556706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.556732 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.556746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.556758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.556770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 03:05:08.556783 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:05:08.556796 | orchestrator | 2026-04-09 03:05:08.556808 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 03:05:08.556821 | orchestrator | Thursday 09 April 2026 03:05:08 +0000 (0:00:00.667) 0:00:19.281 ******** 2026-04-09 03:05:08.556851 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710461 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.710671 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 03:05:08.890534 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890629 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890637 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890679 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890710 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890725 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890747 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:08.890760 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.033750 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:05:09.033852 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.033912 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 03:05:09.033963 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.033988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.033996 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.034004 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.034064 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:05:09.034073 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.034085 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.034099 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.038277 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.038354 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.038363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.038389 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.038409 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.038416 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.038437 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.038447 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 03:05:09.038464 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:09.038477 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:20.327220 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:20.327315 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-01-39-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 03:05:20.327346 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:05:20.327356 | orchestrator | 2026-04-09 03:05:20.327365 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 03:05:20.327373 | orchestrator | Thursday 09 April 2026 03:05:09 +0000 (0:00:00.709) 0:00:19.990 ******** 2026-04-09 03:05:20.327381 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:05:20.327389 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:05:20.327396 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:05:20.327403 | orchestrator | 2026-04-09 03:05:20.327410 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 03:05:20.327417 | orchestrator | Thursday 09 April 2026 03:05:10 +0000 (0:00:00.977) 0:00:20.968 ******** 2026-04-09 03:05:20.327425 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:05:20.327432 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:05:20.327439 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:05:20.327446 | orchestrator | 2026-04-09 03:05:20.327453 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 03:05:20.327460 | orchestrator | Thursday 09 April 2026 03:05:10 +0000 (0:00:00.350) 0:00:21.319 ******** 2026-04-09 03:05:20.327467 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:05:20.327475 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:05:20.327482 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:05:20.327489 | orchestrator | 2026-04-09 03:05:20.327508 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 03:05:20.327516 | orchestrator | Thursday 09 April 2026 03:05:11 +0000 (0:00:00.694) 0:00:22.014 
********
2026-04-09 03:05:20.327523 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.327532 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:20.327544 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:20.327556 | orchestrator |
2026-04-09 03:05:20.327574 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 03:05:20.327586 | orchestrator | Thursday 09 April 2026 03:05:11 +0000 (0:00:00.325) 0:00:22.339 ********
2026-04-09 03:05:20.327598 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.327610 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:20.327621 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:20.327632 | orchestrator |
2026-04-09 03:05:20.327643 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 03:05:20.327656 | orchestrator | Thursday 09 April 2026 03:05:12 +0000 (0:00:00.764) 0:00:23.104 ********
2026-04-09 03:05:20.327667 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.327680 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:20.327692 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:20.327704 | orchestrator |
2026-04-09 03:05:20.327717 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 03:05:20.327728 | orchestrator | Thursday 09 April 2026 03:05:12 +0000 (0:00:00.342) 0:00:23.446 ********
2026-04-09 03:05:20.327739 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 03:05:20.327751 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 03:05:20.327763 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 03:05:20.327775 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 03:05:20.327787 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 03:05:20.327800 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 03:05:20.327811 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 03:05:20.327835 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 03:05:20.327915 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 03:05:20.327930 | orchestrator |
2026-04-09 03:05:20.327943 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 03:05:20.327957 | orchestrator | Thursday 09 April 2026 03:05:13 +0000 (0:00:01.234) 0:00:24.681 ********
2026-04-09 03:05:20.327990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 03:05:20.328004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 03:05:20.328017 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 03:05:20.328030 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.328043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 03:05:20.328055 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 03:05:20.328068 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 03:05:20.328080 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:20.328093 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 03:05:20.328105 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 03:05:20.328117 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 03:05:20.328129 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:20.328142 | orchestrator |
2026-04-09 03:05:20.328154 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 03:05:20.328167 | orchestrator | Thursday 09 April 2026 03:05:14 +0000 (0:00:00.815) 0:00:25.108 ********
2026-04-09 03:05:20.328180 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 03:05:20.328193 | orchestrator |
2026-04-09 03:05:20.328206 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 03:05:20.328220 | orchestrator | Thursday 09 April 2026 03:05:15 +0000 (0:00:00.815) 0:00:25.923 ********
2026-04-09 03:05:20.328232 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.328245 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:20.328257 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:20.328270 | orchestrator |
2026-04-09 03:05:20.328282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 03:05:20.328295 | orchestrator | Thursday 09 April 2026 03:05:15 +0000 (0:00:00.372) 0:00:26.296 ********
2026-04-09 03:05:20.328307 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.328320 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:20.328332 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:20.328345 | orchestrator |
2026-04-09 03:05:20.328357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 03:05:20.328370 | orchestrator | Thursday 09 April 2026 03:05:15 +0000 (0:00:00.341) 0:00:26.637 ********
2026-04-09 03:05:20.328382 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.328395 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:05:20.328407 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:05:20.328419 | orchestrator |
2026-04-09 03:05:20.328432 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 03:05:20.328444 | orchestrator | Thursday 09 April 2026 03:05:16 +0000 (0:00:00.615) 0:00:27.253 ********
2026-04-09 03:05:20.328457 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:20.328469 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:20.328481 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:20.328493 | orchestrator |
2026-04-09 03:05:20.328505 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 03:05:20.328517 | orchestrator | Thursday 09 April 2026 03:05:16 +0000 (0:00:00.498) 0:00:27.752 ********
2026-04-09 03:05:20.328529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 03:05:20.328550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 03:05:20.328571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 03:05:20.328584 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.328597 | orchestrator |
2026-04-09 03:05:20.328609 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 03:05:20.328721 | orchestrator | Thursday 09 April 2026 03:05:17 +0000 (0:00:00.421) 0:00:28.174 ********
2026-04-09 03:05:20.328738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 03:05:20.328751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 03:05:20.328763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 03:05:20.328775 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.328787 | orchestrator |
2026-04-09 03:05:20.328799 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 03:05:20.328811 | orchestrator | Thursday 09 April 2026 03:05:17 +0000 (0:00:00.400) 0:00:28.575 ********
2026-04-09 03:05:20.328823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 03:05:20.328835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 03:05:20.328876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 03:05:20.328890 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:05:20.328902 | orchestrator |
2026-04-09 03:05:20.328913 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 03:05:20.328925 | orchestrator | Thursday 09 April 2026 03:05:18 +0000 (0:00:00.446) 0:00:29.021 ********
2026-04-09 03:05:20.328937 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:05:20.328949 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:05:20.328960 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:05:20.328972 | orchestrator |
2026-04-09 03:05:20.328984 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 03:05:20.328996 | orchestrator | Thursday 09 April 2026 03:05:18 +0000 (0:00:00.367) 0:00:29.388 ********
2026-04-09 03:05:20.329008 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 03:05:20.329021 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 03:05:20.329034 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-09 03:05:20.329046 | orchestrator |
2026-04-09 03:05:20.329058 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 03:05:20.329070 | orchestrator | Thursday 09 April 2026 03:05:19 +0000 (0:00:00.837) 0:00:30.226 ********
2026-04-09 03:05:20.329083 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 03:05:20.329109 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 03:07:00.455436 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 03:07:00.455522 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 03:07:00.455529 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 03:07:00.455534 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 03:07:00.455539 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 03:07:00.455543 | orchestrator |
2026-04-09 03:07:00.455548 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 03:07:00.455553 | orchestrator | Thursday 09 April 2026 03:05:20 +0000 (0:00:00.926) 0:00:31.152 ********
2026-04-09 03:07:00.455557 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 03:07:00.455561 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 03:07:00.455565 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 03:07:00.455569 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 03:07:00.455589 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 03:07:00.455594 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 03:07:00.455597 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 03:07:00.455601 | orchestrator |
2026-04-09 03:07:00.455605 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-04-09 03:07:00.455609 | orchestrator | Thursday 09 April 2026 03:05:22 +0000 (0:00:01.854) 0:00:33.007 ********
2026-04-09 03:07:00.455613 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:07:00.455618 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:07:00.455622 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-04-09 03:07:00.455626 | orchestrator |
2026-04-09 03:07:00.455630 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-04-09 03:07:00.455633 | orchestrator | Thursday 09 April 2026 03:05:22 +0000 (0:00:00.383) 0:00:33.390 ********
2026-04-09 03:07:00.455639 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 03:07:00.455645 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 03:07:00.455660 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 03:07:00.455664 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 03:07:00.455668 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 03:07:00.455672 | orchestrator |
2026-04-09 03:07:00.455675 | orchestrator | TASK [generate keys] ***********************************************************
2026-04-09 03:07:00.455679 | orchestrator | Thursday 09 April 2026 03:06:07 +0000 (0:00:44.870) 0:01:18.260 ********
2026-04-09 03:07:00.455683 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455687 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455742 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455749 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455756 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455767 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-04-09 03:07:00.455771 | orchestrator |
2026-04-09 03:07:00.455775 | orchestrator | TASK [get keys from monitors] **************************************************
2026-04-09 03:07:00.455778 | orchestrator | Thursday 09 April 2026 03:06:32 +0000 (0:00:24.581) 0:01:42.842 ********
2026-04-09 03:07:00.455793 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455803 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455807 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455810 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455814 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455818 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455822 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 03:07:00.455825 | orchestrator |
2026-04-09 03:07:00.455829 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-04-09 03:07:00.455833 | orchestrator | Thursday 09 April 2026 03:06:43 +0000 (0:00:11.525) 0:01:54.368 ********
2026-04-09 03:07:00.455837 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455841 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 03:07:00.455844 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 03:07:00.455848 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455852 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 03:07:00.455856 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 03:07:00.455860 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455863 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 03:07:00.455867 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 03:07:00.455871 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455874 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 03:07:00.455878 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 03:07:00.455882 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455886 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 03:07:00.455889 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 03:07:00.455893 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 03:07:00.455897 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 03:07:00.455901 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 03:07:00.455905 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-04-09 03:07:00.455909 | orchestrator |
2026-04-09 03:07:00.455913 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:07:00.455921 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-09 03:07:00.455926 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-09 03:07:00.455931 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-09 03:07:00.455935 | orchestrator |
2026-04-09 03:07:00.455938 | orchestrator |
2026-04-09 03:07:00.455942 | orchestrator |
2026-04-09 03:07:00.455946 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:07:00.455950 | orchestrator | Thursday 09 April 2026 03:07:00 +0000 (0:00:16.891) 0:02:11.260 ********
2026-04-09 03:07:00.455953 | orchestrator | ===============================================================================
2026-04-09 03:07:00.455960 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.87s
2026-04-09 03:07:00.455964 | orchestrator | generate keys ---------------------------------------------------------- 24.58s
2026-04-09 03:07:00.455968 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.89s
2026-04-09 03:07:00.455972 | orchestrator | get keys from monitors ------------------------------------------------- 11.53s
2026-04-09 03:07:00.455975 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.31s
2026-04-09 03:07:00.455979 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.86s
2026-04-09 03:07:00.455983 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.71s
2026-04-09 03:07:00.455986 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.23s
2026-04-09 03:07:00.455990 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.16s
2026-04-09 03:07:00.455994 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.98s
2026-04-09 03:07:00.455997 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.94s
2026-04-09 03:07:00.456001 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.93s
2026-04-09 03:07:00.456005 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.89s
2026-04-09 03:07:00.456012 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.84s
2026-04-09 03:07:00.872079 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.82s
2026-04-09 03:07:00.872152 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.76s
2026-04-09 03:07:00.872157 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.76s
2026-04-09 03:07:00.872161 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.76s
2026-04-09 03:07:00.872166 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.75s
2026-04-09 03:07:00.872170 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.71s
2026-04-09 03:07:03.466496 | orchestrator | 2026-04-09 03:07:03 | INFO  | Task 158edfe0-754c-4fc0-86cf-2067e9a0b234 (copy-ceph-keys) was prepared for execution.
2026-04-09 03:07:03.466587 | orchestrator | 2026-04-09 03:07:03 | INFO  | It takes a moment until task 158edfe0-754c-4fc0-86cf-2067e9a0b234 (copy-ceph-keys) has been started and output is visible here.
2026-04-09 03:07:44.032020 | orchestrator |
2026-04-09 03:07:44.032105 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-09 03:07:44.032112 | orchestrator |
2026-04-09 03:07:44.032118 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-09 03:07:44.032123 | orchestrator | Thursday 09 April 2026 03:07:08 +0000 (0:00:00.187) 0:00:00.187 ********
2026-04-09 03:07:44.032128 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-09 03:07:44.032135 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032140 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032144 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 03:07:44.032149 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032154 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-09 03:07:44.032159 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-09 03:07:44.032163 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-09 03:07:44.032186 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-09 03:07:44.032191 | orchestrator |
2026-04-09 03:07:44.032198 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-09 03:07:44.032206 | orchestrator | Thursday 09 April 2026 03:07:12 +0000 (0:00:04.541) 0:00:04.729 ********
2026-04-09 03:07:44.032214 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-09 03:07:44.032236 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032243 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032252 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 03:07:44.032260 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032268 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-09 03:07:44.032277 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-09 03:07:44.032285 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-09 03:07:44.032295 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-09 03:07:44.032300 | orchestrator |
2026-04-09 03:07:44.032304 | orchestrator | TASK [Create share directory] **************************************************
2026-04-09 03:07:44.032309 | orchestrator | Thursday 09 April 2026 03:07:17 +0000 (0:00:04.384) 0:00:09.113 ********
2026-04-09 03:07:44.032314 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 03:07:44.032319 | orchestrator |
2026-04-09 03:07:44.032324 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-09 03:07:44.032328 | orchestrator | Thursday 09 April 2026 03:07:18 +0000 (0:00:01.079) 0:00:10.193 ********
2026-04-09 03:07:44.032333 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-09 03:07:44.032338 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032343 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032348 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 03:07:44.032353 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032358 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-09 03:07:44.032366 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-09 03:07:44.032377 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-09 03:07:44.032386 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-09 03:07:44.032393 | orchestrator |
2026-04-09 03:07:44.032400 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-09 03:07:44.032407 | orchestrator | Thursday 09 April 2026 03:07:32 +0000 (0:00:14.575) 0:00:24.769 ********
2026-04-09 03:07:44.032414 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-09 03:07:44.032421 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-09 03:07:44.032429 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-09 03:07:44.032436 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-09 03:07:44.032457 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-09 03:07:44.032474 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-09 03:07:44.032481 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-09 03:07:44.032488 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-09 03:07:44.032495 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-09 03:07:44.032502 | orchestrator |
2026-04-09 03:07:44.032509 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-09 03:07:44.032516 | orchestrator | Thursday 09 April 2026 03:07:36 +0000 (0:00:03.445) 0:00:28.214 ********
2026-04-09 03:07:44.032525 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-09 03:07:44.032533 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032540 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032548 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 03:07:44.032555 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-09 03:07:44.032562 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-09 03:07:44.032569 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-09 03:07:44.032576 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-09 03:07:44.032584 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-09 03:07:44.032593 | orchestrator |
2026-04-09 03:07:44.032602 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:07:44.032616 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 03:07:44.032681 | orchestrator |
2026-04-09 03:07:44.032689 | orchestrator |
2026-04-09 03:07:44.032694 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:07:44.032700 | orchestrator | Thursday 09 April 2026 03:07:43 +0000 (0:00:07.502) 0:00:35.717 ********
2026-04-09 03:07:44.032706 | orchestrator | ===============================================================================
2026-04-09 03:07:44.032711 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.58s
2026-04-09 03:07:44.032717 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.50s
2026-04-09 03:07:44.032722 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.54s
2026-04-09 03:07:44.032728 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.38s
2026-04-09 03:07:44.032733 | orchestrator | Check if target directories exist --------------------------------------- 3.45s
2026-04-09 03:07:44.032739 | orchestrator | Create share directory -------------------------------------------------- 1.08s
2026-04-09 03:07:56.828685 | orchestrator | 2026-04-09 03:07:56 | INFO  | Task 9569b663-326f-4c80-9c07-8352929c9223 (cephclient) was prepared for execution.
2026-04-09 03:07:56.828795 | orchestrator | 2026-04-09 03:07:56 | INFO  | It takes a moment until task 9569b663-326f-4c80-9c07-8352929c9223 (cephclient) has been started and output is visible here.
2026-04-09 03:09:00.537279 | orchestrator |
2026-04-09 03:09:00.537369 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-09 03:09:00.537379 | orchestrator |
2026-04-09 03:09:00.537386 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-09 03:09:00.537392 | orchestrator | Thursday 09 April 2026 03:08:01 +0000 (0:00:00.250) 0:00:00.250 ********
2026-04-09 03:09:00.537398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-09 03:09:00.537423 | orchestrator |
2026-04-09 03:09:00.537430 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-09 03:09:00.537435 | orchestrator | Thursday 09 April 2026 03:08:01 +0000 (0:00:00.255) 0:00:00.506 ********
2026-04-09 03:09:00.537441 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-09 03:09:00.537447 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-09 03:09:00.537453 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-09 03:09:00.537459 | orchestrator |
2026-04-09 03:09:00.537465 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-09 03:09:00.537470 | orchestrator | Thursday 09 April 2026 03:08:03 +0000 (0:00:01.343) 0:00:01.849 ********
2026-04-09 03:09:00.537477 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-09 03:09:00.537482 | orchestrator |
2026-04-09 03:09:00.537488 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-09 03:09:00.537493 | orchestrator | Thursday 09 April 2026 03:08:04 +0000 (0:00:01.699) 0:00:03.549 ********
2026-04-09 03:09:00.537499 | orchestrator | changed: [testbed-manager]
2026-04-09 03:09:00.537504 | orchestrator |
2026-04-09 03:09:00.537510 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-09 03:09:00.537515 | orchestrator | Thursday 09 April 2026 03:08:05 +0000 (0:00:01.084) 0:00:04.634 ********
2026-04-09 03:09:00.537520 | orchestrator | changed: [testbed-manager]
2026-04-09 03:09:00.537568 | orchestrator |
2026-04-09 03:09:00.537574 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-09 03:09:00.537580 | orchestrator | Thursday 09 April 2026 03:08:06 +0000 (0:00:01.033) 0:00:05.667 ********
2026-04-09 03:09:00.537585 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-09 03:09:00.537591 | orchestrator | ok: [testbed-manager]
2026-04-09 03:09:00.537596 | orchestrator |
2026-04-09 03:09:00.537602 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-09 03:09:00.537607 | orchestrator | Thursday 09 April 2026 03:08:49 +0000 (0:00:42.685) 0:00:48.353 ********
2026-04-09 03:09:00.537613 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-09 03:09:00.537619 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-09 03:09:00.537624 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-09 03:09:00.537630 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-09 03:09:00.537635 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-09 03:09:00.537641 | orchestrator |
2026-04-09 03:09:00.537647 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-09 03:09:00.537652 | orchestrator | Thursday 09 April 2026 03:08:54 +0000 (0:00:04.416) 0:00:52.769 ********
2026-04-09 03:09:00.537658 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-09 03:09:00.537663 | orchestrator |
2026-04-09 03:09:00.537671 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-09 03:09:00.537680 | orchestrator | Thursday 09 April 2026 03:08:54 +0000 (0:00:00.491) 0:00:53.261 ********
2026-04-09 03:09:00.537690 | orchestrator | skipping: [testbed-manager]
2026-04-09 03:09:00.537699 | orchestrator |
2026-04-09 03:09:00.537708 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-09 03:09:00.537716 | orchestrator | Thursday 09 April 2026 03:08:54 +0000 (0:00:00.140) 0:00:53.402 ********
2026-04-09 03:09:00.537725 | orchestrator | skipping: [testbed-manager]
2026-04-09 03:09:00.537734 | orchestrator |
2026-04-09 03:09:00.537744 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-09 03:09:00.537753 | orchestrator | Thursday 09 April 2026 03:08:55 +0000 (0:00:00.551) 0:00:53.953 ********
2026-04-09 03:09:00.537776 | orchestrator | changed: [testbed-manager]
2026-04-09 03:09:00.537783 | orchestrator |
2026-04-09 03:09:00.537789 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-09 03:09:00.537805 | orchestrator | Thursday 09 April 2026 03:08:57 +0000 (0:00:01.943) 0:00:55.897 ********
2026-04-09 03:09:00.537811 | orchestrator | changed: [testbed-manager]
2026-04-09 03:09:00.537816 | orchestrator |
2026-04-09 03:09:00.537822 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-09 03:09:00.537828 | orchestrator | Thursday 09 April 2026 03:08:57 +0000 (0:00:00.599) 0:00:56.689 ********
2026-04-09 03:09:00.537833 | orchestrator | changed: [testbed-manager]
2026-04-09 03:09:00.537839 | orchestrator |
2026-04-09 03:09:00.537845 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-09 03:09:00.537851 | orchestrator | Thursday 09 April 2026 03:08:58 +0000 (0:00:00.599) 0:00:57.288 ********
2026-04-09 03:09:00.537857 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-09 03:09:00.537862 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-09 03:09:00.537868 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-09 03:09:00.537874 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-09 03:09:00.537880 | orchestrator |
2026-04-09 03:09:00.537886 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:09:00.537892 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 03:09:00.537917 | orchestrator |
2026-04-09 03:09:00.537924 | orchestrator |
2026-04-09 03:09:00.537943 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:09:00.537950 | orchestrator | Thursday 09 April 2026 03:09:00 +0000 (0:00:01.556) 0:00:58.845 ********
2026-04-09 03:09:00.537955 | orchestrator | ===============================================================================
2026-04-09 03:09:00.537961 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.69s
2026-04-09 03:09:00.537967 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.42s
2026-04-09 03:09:00.537972 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.94s
2026-04-09 03:09:00.537978 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.70s
2026-04-09 03:09:00.537984 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.56s
2026-04-09 03:09:00.537989 |
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.34s 2026-04-09 03:09:00.537995 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.08s 2026-04-09 03:09:00.538001 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.03s 2026-04-09 03:09:00.538006 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s 2026-04-09 03:09:00.538012 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2026-04-09 03:09:00.538062 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.55s 2026-04-09 03:09:00.538068 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s 2026-04-09 03:09:00.538073 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2026-04-09 03:09:00.538079 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-04-09 03:09:03.109878 | orchestrator | 2026-04-09 03:09:03 | INFO  | Task f40df4ea-9cd5-48c6-a53e-e8c3935e64c0 (ceph-bootstrap-dashboard) was prepared for execution. 2026-04-09 03:09:03.109986 | orchestrator | 2026-04-09 03:09:03 | INFO  | It takes a moment until task f40df4ea-9cd5-48c6-a53e-e8c3935e64c0 (ceph-bootstrap-dashboard) has been started and output is visible here. 
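The "Copy wrapper scripts" task in the cephclient play above installs small shims on the manager host so that `ceph`, `ceph-authtool`, `rados`, `radosgw-admin`, and `rbd` invocations are forwarded into the cephclient container, rather than requiring locally installed Ceph packages. A minimal sketch of what such a wrapper can look like (hypothetical; the actual templates ship with the osism.services.cephclient role and may use different paths and options):

```shell
#!/usr/bin/env bash
# Hypothetical /usr/local/bin/ceph wrapper: forward all arguments to the
# ceph binary inside the cephclient service deployed via the
# docker-compose.yml copied to /opt/cephclient above.
exec docker compose --project-directory /opt/cephclient \
    exec cephclient ceph "$@"
```

With such a shim in place, `ceph -s` on the manager behaves like a native client while the binaries and configuration stay inside the container.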
2026-04-09 03:10:23.740486 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 03:10:23.740703 | orchestrator | 2.16.14
2026-04-09 03:10:23.741679 | orchestrator |
2026-04-09 03:10:23.741720 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-04-09 03:10:23.741741 | orchestrator |
2026-04-09 03:10:23.741756 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-04-09 03:10:23.741819 | orchestrator | Thursday 09 April 2026 03:09:07 +0000 (0:00:00.315) 0:00:00.315 ********
2026-04-09 03:10:23.741842 | orchestrator | changed: [testbed-manager]
2026-04-09 03:10:23.741862 | orchestrator |
2026-04-09 03:10:23.741880 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-04-09 03:10:23.741898 | orchestrator | Thursday 09 April 2026 03:09:09 +0000 (0:00:01.898) 0:00:02.213 ********
2026-04-09 03:10:23.741916 | orchestrator | changed: [testbed-manager]
2026-04-09 03:10:23.741934 | orchestrator |
2026-04-09 03:10:23.741953 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-04-09 03:10:23.741971 | orchestrator | Thursday 09 April 2026 03:09:10 +0000 (0:00:01.138) 0:00:03.352 ********
2026-04-09 03:10:23.741989 | orchestrator | changed: [testbed-manager]
2026-04-09 03:10:23.742008 | orchestrator |
2026-04-09 03:10:23.742104 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-04-09 03:10:23.742125 | orchestrator | Thursday 09 April 2026 03:09:12 +0000 (0:00:01.178) 0:00:04.531 ********
2026-04-09 03:10:23.742144 | orchestrator | changed: [testbed-manager]
2026-04-09 03:10:23.742163 | orchestrator |
2026-04-09 03:10:23.742184 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-04-09 03:10:23.742204 | orchestrator | Thursday 09 April 2026 03:09:13 +0000 (0:00:01.287) 0:00:05.819 ********
2026-04-09 03:10:23.742223 | orchestrator | changed: [testbed-manager]
2026-04-09 03:10:23.742242 | orchestrator |
2026-04-09 03:10:23.742261 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-04-09 03:10:23.742304 | orchestrator | Thursday 09 April 2026 03:09:14 +0000 (0:00:01.111) 0:00:06.930 ********
2026-04-09 03:10:23.742325 | orchestrator | changed: [testbed-manager]
2026-04-09 03:10:23.742361 | orchestrator |
2026-04-09 03:10:23.742393 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-04-09 03:10:23.742412 | orchestrator | Thursday 09 April 2026 03:09:15 +0000 (0:00:01.143) 0:00:08.074 ********
2026-04-09 03:10:23.742458 | orchestrator | changed: [testbed-manager]
2026-04-09 03:10:23.742479 | orchestrator |
2026-04-09 03:10:23.742498 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-04-09 03:10:23.742518 | orchestrator | Thursday 09 April 2026 03:09:17 +0000 (0:00:02.075) 0:00:10.149 ********
2026-04-09 03:10:23.742536 | orchestrator | changed: [testbed-manager]
2026-04-09 03:10:23.742553 | orchestrator |
2026-04-09 03:10:23.742571 | orchestrator | TASK [Create admin user] *******************************************************
2026-04-09 03:10:23.742589 | orchestrator | Thursday 09 April 2026 03:09:19 +0000 (0:00:01.321) 0:00:11.471 ********
2026-04-09 03:10:23.742609 | orchestrator | changed: [testbed-manager]
2026-04-09 03:10:23.742629 | orchestrator |
2026-04-09 03:10:23.742647 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-04-09 03:10:23.742665 | orchestrator | Thursday 09 April 2026 03:09:58 +0000 (0:00:39.478) 0:00:50.950 ********
2026-04-09 03:10:23.742684 | orchestrator | skipping: [testbed-manager]
2026-04-09 03:10:23.742701 | orchestrator |
2026-04-09 03:10:23.742719 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-09 03:10:23.742738 | orchestrator |
2026-04-09 03:10:23.742758 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-09 03:10:23.742778 | orchestrator | Thursday 09 April 2026 03:09:58 +0000 (0:00:00.193) 0:00:51.143 ********
2026-04-09 03:10:23.742798 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:10:23.742817 | orchestrator |
2026-04-09 03:10:23.742838 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-09 03:10:23.742857 | orchestrator |
2026-04-09 03:10:23.742877 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-09 03:10:23.742897 | orchestrator | Thursday 09 April 2026 03:10:10 +0000 (0:00:11.943) 0:01:03.087 ********
2026-04-09 03:10:23.742916 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:10:23.742936 | orchestrator |
2026-04-09 03:10:23.742955 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-09 03:10:23.742997 | orchestrator |
2026-04-09 03:10:23.743017 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-09 03:10:23.743037 | orchestrator | Thursday 09 April 2026 03:10:11 +0000 (0:00:01.290) 0:01:04.377 ********
2026-04-09 03:10:23.743059 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:10:23.743079 | orchestrator |
2026-04-09 03:10:23.743098 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:10:23.743120 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 03:10:23.743142 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 03:10:23.743163 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 03:10:23.743180 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 03:10:23.743197 | orchestrator |
2026-04-09 03:10:23.743214 | orchestrator |
2026-04-09 03:10:23.743230 | orchestrator |
2026-04-09 03:10:23.743248 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:10:23.743266 | orchestrator | Thursday 09 April 2026 03:10:23 +0000 (0:00:11.380) 0:01:15.757 ********
2026-04-09 03:10:23.743284 | orchestrator | ===============================================================================
2026-04-09 03:10:23.743303 | orchestrator | Create admin user ------------------------------------------------------ 39.48s
2026-04-09 03:10:23.743379 | orchestrator | Restart ceph manager service ------------------------------------------- 24.61s
2026-04-09 03:10:23.743402 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s
2026-04-09 03:10:23.743420 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.90s
2026-04-09 03:10:23.743485 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.32s
2026-04-09 03:10:23.743504 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s
2026-04-09 03:10:23.743522 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.18s
2026-04-09 03:10:23.743539 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.14s
2026-04-09 03:10:23.743558 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.14s
2026-04-09 03:10:23.743576 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.11s
2026-04-09 03:10:23.743596 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.19s
2026-04-09 03:10:24.114642 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-04-09 03:10:26.344071 | orchestrator | 2026-04-09 03:10:26 | INFO  | Task 18a2b672-f4f8-4727-9ba8-e12f15297c02 (keystone) was prepared for execution.
2026-04-09 03:10:26.344149 | orchestrator | 2026-04-09 03:10:26 | INFO  | It takes a moment until task 18a2b672-f4f8-4727-9ba8-e12f15297c02 (keystone) has been started and output is visible here.
2026-04-09 03:10:34.246216 | orchestrator |
2026-04-09 03:10:34.246340 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 03:10:34.246360 | orchestrator |
2026-04-09 03:10:34.246374 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 03:10:34.246407 | orchestrator | Thursday 09 April 2026 03:10:31 +0000 (0:00:00.313) 0:00:00.313 ********
2026-04-09 03:10:34.246486 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:10:34.246501 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:10:34.246514 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:10:34.246527 | orchestrator |
2026-04-09 03:10:34.246541 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 03:10:34.246553 | orchestrator | Thursday 09 April 2026 03:10:31 +0000 (0:00:00.343) 0:00:00.657 ********
2026-04-09 03:10:34.246596 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-09 03:10:34.246612 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-09 03:10:34.246625 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-09 03:10:34.246638 | orchestrator |
2026-04-09 03:10:34.246652 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-09 03:10:34.246666 | orchestrator |
2026-04-09 03:10:34.246680 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-09 03:10:34.246694 | orchestrator | Thursday 09 April 2026 03:10:31 +0000 (0:00:00.481) 0:00:01.138 ********
2026-04-09 03:10:34.246709 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:10:34.246718 | orchestrator |
2026-04-09 03:10:34.246726 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-09 03:10:34.246734 | orchestrator | Thursday 09 April 2026 03:10:32 +0000 (0:00:00.644) 0:00:01.783 ********
2026-04-09 03:10:34.246748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:34.246763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:34.246803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:34.246827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:34.246842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:34.246854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:34.246864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:10:34.246873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:10:34.246883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:10:34.246898 | orchestrator |
2026-04-09 03:10:34.246907 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-09 03:10:34.246923 | orchestrator | Thursday 09 April 2026 03:10:34 +0000 (0:00:01.699) 0:00:03.483 ********
2026-04-09 03:10:40.199112 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:10:40.199209 | orchestrator |
2026-04-09 03:10:40.199226 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-09 03:10:40.199255 | orchestrator | Thursday 09 April 2026 03:10:34 +0000 (0:00:00.324) 0:00:03.808 ********
2026-04-09 03:10:40.199264 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:10:40.199271 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:10:40.199279 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:10:40.199286 | orchestrator |
2026-04-09 03:10:40.199293 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-09 03:10:40.199301 | orchestrator | Thursday 09 April 2026 03:10:34 +0000 (0:00:00.325) 0:00:04.133 ********
2026-04-09 03:10:40.199308 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 03:10:40.199316 | orchestrator |
2026-04-09 03:10:40.199323 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-09 03:10:40.199330 | orchestrator | Thursday 09 April 2026 03:10:35 +0000 (0:00:00.894) 0:00:05.028 ********
2026-04-09 03:10:40.199338 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:10:40.199346 | orchestrator |
2026-04-09 03:10:40.199353 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-04-09 03:10:40.199360 | orchestrator | Thursday 09 April 2026 03:10:36 +0000 (0:00:00.583) 0:00:05.611 ********
2026-04-09 03:10:40.199373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:40.199384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:40.199393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:40.199487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:40.199500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:40.199508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:40.199516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:10:40.199523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:10:40.199537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:10:40.199545 | orchestrator |
2026-04-09 03:10:40.199552 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-04-09 03:10:40.199560 | orchestrator | Thursday 09 April 2026 03:10:39 +0000 (0:00:03.226) 0:00:08.837 ********
2026-04-09 03:10:40.199575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:41.046691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:41.046801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:10:41.046821 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:10:41.046836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:41.046899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:41.046911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/,
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 03:10:41.046918 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:10:41.046942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 03:10:41.046949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-09 03:10:41.046956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 03:10:41.046976 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:10:41.046983 | orchestrator | 2026-04-09 03:10:41.046990 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-09 03:10:41.046998 | orchestrator | Thursday 09 April 2026 03:10:40 +0000 (0:00:00.604) 0:00:09.442 ******** 2026-04-09 03:10:41.047005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 03:10:41.047021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 03:10:41.047043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 03:10:44.291781 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:10:44.291926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 03:10:44.291959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 03:10:44.292056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 03:10:44.292078 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 03:10:44.292111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 03:10:44.292141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 03:10:44.292191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 03:10:44.292212 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:10:44.292231 | orchestrator | 2026-04-09 03:10:44.292250 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-09 03:10:44.292270 | orchestrator | Thursday 09 April 2026 03:10:41 +0000 (0:00:00.847) 0:00:10.290 ******** 2026-04-09 03:10:44.292290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 03:10:44.292328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 03:10:44.292360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 03:10:44.292394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 03:10:49.249602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 03:10:49.249730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-09 03:10:49.249751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 03:10:49.249766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 03:10:49.249799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 
03:10:49.249816 | orchestrator | 2026-04-09 03:10:49.249834 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-09 03:10:49.249850 | orchestrator | Thursday 09 April 2026 03:10:44 +0000 (0:00:03.242) 0:00:13.533 ******** 2026-04-09 03:10:49.249894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 03:10:49.249932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-04-09 03:10:49.249948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 03:10:49.249970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 03:10:49.249986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 03:10:49.250008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 03:10:53.162900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 03:10:53.163015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 03:10:53.163026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 03:10:53.163034 | orchestrator | 2026-04-09 03:10:53.163042 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-09 03:10:53.163052 | orchestrator | Thursday 09 April 2026 03:10:49 +0000 (0:00:04.957) 0:00:18.491 ******** 2026-04-09 03:10:53.163059 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:10:53.163067 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:10:53.163074 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:10:53.163081 | orchestrator | 2026-04-09 
03:10:53.163090 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-04-09 03:10:53.163096 | orchestrator | Thursday 09 April 2026 03:10:50 +0000 (0:00:01.417) 0:00:19.909 ********
2026-04-09 03:10:53.163103 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:10:53.163110 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:10:53.163117 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:10:53.163124 | orchestrator |
2026-04-09 03:10:53.163131 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-04-09 03:10:53.163138 | orchestrator | Thursday 09 April 2026 03:10:51 +0000 (0:00:00.850) 0:00:20.759 ********
2026-04-09 03:10:53.163145 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:10:53.163152 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:10:53.163159 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:10:53.163166 | orchestrator |
2026-04-09 03:10:53.163185 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-04-09 03:10:53.163193 | orchestrator | Thursday 09 April 2026 03:10:52 +0000 (0:00:00.586) 0:00:21.345 ********
2026-04-09 03:10:53.163200 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:10:53.163206 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:10:53.163214 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:10:53.163220 | orchestrator |
2026-04-09 03:10:53.163228 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-04-09 03:10:53.163235 | orchestrator | Thursday 09 April 2026 03:10:52 +0000 (0:00:00.371) 0:00:21.717 ********
2026-04-09 03:10:53.163260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:53.163274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:53.163281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:10:53.163287 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:10:53.163295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:10:53.163308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:10:53.163315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:10:53.163331 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:10:53.163345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:11:12.547101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:11:12.547233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:11:12.547265 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:11:12.547288 | orchestrator |
2026-04-09 03:11:12.547309 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-09 03:11:12.547331 | orchestrator | Thursday 09 April 2026 03:10:53 +0000 (0:00:00.682) 0:00:22.400 ********
2026-04-09 03:11:12.547350 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:11:12.547402 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:11:12.547423 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:11:12.547441 | orchestrator |
2026-04-09 03:11:12.547459 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-04-09 03:11:12.547479 | orchestrator | Thursday 09 April 2026 03:10:53 +0000 (0:00:00.308) 0:00:22.708 ********
2026-04-09 03:11:12.547498 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-09 03:11:12.547519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-09 03:11:12.547580 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-09 03:11:12.547592 | orchestrator |
2026-04-09 03:11:12.547619 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-04-09 03:11:12.547637 | orchestrator | Thursday 09 April 2026 03:10:55 +0000 (0:00:01.941) 0:00:24.650 ********
2026-04-09 03:11:12.547656 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 03:11:12.547674 | orchestrator |
2026-04-09 03:11:12.547693 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-09 03:11:12.547711 | orchestrator | Thursday 09 April 2026 03:10:56 +0000 (0:00:00.988) 0:00:25.638 ********
2026-04-09 03:11:12.547729 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:11:12.547746 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:11:12.547765 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:11:12.547784 | orchestrator |
2026-04-09 03:11:12.547802 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-09 03:11:12.547821 | orchestrator | Thursday 09 April 2026 03:10:56 +0000 (0:00:00.605) 0:00:26.244 ********
2026-04-09 03:11:12.547837 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-09 03:11:12.547849 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 03:11:12.547859 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-09 03:11:12.547870 | orchestrator |
2026-04-09 03:11:12.547881 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-09 03:11:12.547893 | orchestrator | Thursday 09 April 2026 03:10:58 +0000 (0:00:01.125) 0:00:27.370 ********
2026-04-09 03:11:12.547904 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:11:12.547915 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:11:12.547926 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:11:12.547937 | orchestrator |
2026-04-09 03:11:12.547948 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-09 03:11:12.547958 | orchestrator | Thursday 09 April 2026 03:10:58 +0000 (0:00:00.574) 0:00:27.944 ********
2026-04-09 03:11:12.547969 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-09 03:11:12.547980 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-09 03:11:12.547992 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-09 03:11:12.548002 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-09 03:11:12.548013 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-09 03:11:12.548024 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-09 03:11:12.548035 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-09 03:11:12.548046 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-09 03:11:12.548078 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-09 03:11:12.548089 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-09 03:11:12.548100 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-09 03:11:12.548111 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-09 03:11:12.548122 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-09 03:11:12.548133 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-09 03:11:12.548143 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-09 03:11:12.548167 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 03:11:12.548178 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 03:11:12.548189 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 03:11:12.548200 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 03:11:12.548211 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 03:11:12.548221 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 03:11:12.548232 | orchestrator |
2026-04-09 03:11:12.548242 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-09 03:11:12.548253 | orchestrator | Thursday 09 April 2026 03:11:07 +0000 (0:00:08.797) 0:00:36.741 ********
2026-04-09 03:11:12.548263 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 03:11:12.548274 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 03:11:12.548284 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 03:11:12.548295 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 03:11:12.548306 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 03:11:12.548316 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 03:11:12.548327 | orchestrator |
2026-04-09 03:11:12.548337 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-04-09 03:11:12.548356 | orchestrator | Thursday 09 April 2026 03:11:10 +0000 (0:00:02.726) 0:00:39.468 ********
2026-04-09 03:11:12.548405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:11:12.548442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:12:57.760011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-09 03:12:57.760173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:12:57.760219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:12:57.760237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 03:12:57.760246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:12:57.760384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:12:57.760423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 03:12:57.760440 | orchestrator |
2026-04-09 03:12:57.760458 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-09 03:12:57.760477 | orchestrator | Thursday 09 April 2026 03:11:12 +0000 (0:00:02.319) 0:00:41.787 ******** 2026-04-09 03:12:57.760493 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:12:57.760510 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:12:57.760519 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:12:57.760527 | orchestrator | 2026-04-09 03:12:57.760536 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-09 03:12:57.760547 | orchestrator | Thursday 09 April 2026 03:11:13 +0000 (0:00:00.571) 0:00:42.359 ******** 2026-04-09 03:12:57.760557 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:12:57.760594 | orchestrator | 2026-04-09 03:12:57.760605 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-09 03:12:57.760616 | orchestrator | Thursday 09 April 2026 03:11:15 +0000 (0:00:02.284) 0:00:44.644 ******** 2026-04-09 03:12:57.760625 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:12:57.760636 | orchestrator | 2026-04-09 03:12:57.760647 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-09 03:12:57.760657 | orchestrator | Thursday 09 April 2026 03:11:17 +0000 (0:00:02.252) 0:00:46.896 ******** 2026-04-09 03:12:57.760667 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:12:57.760678 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:12:57.760687 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:12:57.760698 | orchestrator | 2026-04-09 03:12:57.760708 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-09 03:12:57.760718 | orchestrator | Thursday 09 April 2026 03:11:18 +0000 (0:00:00.835) 0:00:47.732 ******** 2026-04-09 03:12:57.760728 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:12:57.760738 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:12:57.760748 | orchestrator | ok: 
[testbed-node-2] 2026-04-09 03:12:57.760758 | orchestrator | 2026-04-09 03:12:57.760768 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-09 03:12:57.760787 | orchestrator | Thursday 09 April 2026 03:11:18 +0000 (0:00:00.350) 0:00:48.082 ******** 2026-04-09 03:12:57.760798 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:12:57.760808 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:12:57.760818 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:12:57.760828 | orchestrator | 2026-04-09 03:12:57.760838 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-09 03:12:57.760848 | orchestrator | Thursday 09 April 2026 03:11:19 +0000 (0:00:00.634) 0:00:48.716 ******** 2026-04-09 03:12:57.760858 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:12:57.760868 | orchestrator | 2026-04-09 03:12:57.760878 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-09 03:12:57.760888 | orchestrator | Thursday 09 April 2026 03:11:34 +0000 (0:00:15.079) 0:01:03.796 ******** 2026-04-09 03:12:57.760898 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:12:57.760908 | orchestrator | 2026-04-09 03:12:57.760918 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-09 03:12:57.760928 | orchestrator | Thursday 09 April 2026 03:11:45 +0000 (0:00:11.117) 0:01:14.913 ******** 2026-04-09 03:12:57.760945 | orchestrator | 2026-04-09 03:12:57.760954 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-09 03:12:57.760962 | orchestrator | Thursday 09 April 2026 03:11:45 +0000 (0:00:00.066) 0:01:14.980 ******** 2026-04-09 03:12:57.760971 | orchestrator | 2026-04-09 03:12:57.760980 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-09 
03:12:57.760988 | orchestrator | Thursday 09 April 2026 03:11:45 +0000 (0:00:00.070) 0:01:15.050 ******** 2026-04-09 03:12:57.760997 | orchestrator | 2026-04-09 03:12:57.761005 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-09 03:12:57.761014 | orchestrator | Thursday 09 April 2026 03:11:45 +0000 (0:00:00.079) 0:01:15.130 ******** 2026-04-09 03:12:57.761022 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:12:57.761031 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:12:57.761040 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:12:57.761048 | orchestrator | 2026-04-09 03:12:57.761057 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-09 03:12:57.761065 | orchestrator | Thursday 09 April 2026 03:12:34 +0000 (0:00:48.795) 0:02:03.925 ******** 2026-04-09 03:12:57.761074 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:12:57.761082 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:12:57.761091 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:12:57.761099 | orchestrator | 2026-04-09 03:12:57.761108 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-09 03:12:57.761117 | orchestrator | Thursday 09 April 2026 03:12:45 +0000 (0:00:10.491) 0:02:14.416 ******** 2026-04-09 03:12:57.761125 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:12:57.761134 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:12:57.761142 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:12:57.761151 | orchestrator | 2026-04-09 03:12:57.761160 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 03:12:57.761168 | orchestrator | Thursday 09 April 2026 03:12:57 +0000 (0:00:11.943) 0:02:26.360 ******** 2026-04-09 03:12:57.761184 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:13:48.540189 | orchestrator | 2026-04-09 03:13:48.540335 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-09 03:13:48.540352 | orchestrator | Thursday 09 April 2026 03:12:57 +0000 (0:00:00.643) 0:02:27.003 ******** 2026-04-09 03:13:48.540361 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:13:48.540372 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:13:48.540382 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:13:48.540391 | orchestrator | 2026-04-09 03:13:48.540400 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-09 03:13:48.540409 | orchestrator | Thursday 09 April 2026 03:12:59 +0000 (0:00:01.268) 0:02:28.272 ******** 2026-04-09 03:13:48.540419 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:13:48.540428 | orchestrator | 2026-04-09 03:13:48.540437 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-09 03:13:48.540446 | orchestrator | Thursday 09 April 2026 03:13:00 +0000 (0:00:01.885) 0:02:30.157 ******** 2026-04-09 03:13:48.540455 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-09 03:13:48.540464 | orchestrator | 2026-04-09 03:13:48.540472 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-09 03:13:48.540481 | orchestrator | Thursday 09 April 2026 03:13:12 +0000 (0:00:11.744) 0:02:41.902 ******** 2026-04-09 03:13:48.540490 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-09 03:13:48.540499 | orchestrator | 2026-04-09 03:13:48.540507 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-09 03:13:48.540516 | orchestrator | Thursday 09 April 2026 03:13:36 +0000 (0:00:23.894) 0:03:05.796 ******** 2026-04-09 03:13:48.540525 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-09 03:13:48.540598 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-09 03:13:48.540618 | orchestrator | 2026-04-09 03:13:48.540633 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-09 03:13:48.540645 | orchestrator | Thursday 09 April 2026 03:13:43 +0000 (0:00:06.615) 0:03:12.412 ******** 2026-04-09 03:13:48.540660 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:13:48.540674 | orchestrator | 2026-04-09 03:13:48.540689 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-09 03:13:48.540705 | orchestrator | Thursday 09 April 2026 03:13:43 +0000 (0:00:00.161) 0:03:12.573 ******** 2026-04-09 03:13:48.540720 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:13:48.540735 | orchestrator | 2026-04-09 03:13:48.540749 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-09 03:13:48.540764 | orchestrator | Thursday 09 April 2026 03:13:43 +0000 (0:00:00.118) 0:03:12.691 ******** 2026-04-09 03:13:48.540782 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:13:48.540804 | orchestrator | 2026-04-09 03:13:48.540836 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-09 03:13:48.540851 | orchestrator | Thursday 09 April 2026 03:13:43 +0000 (0:00:00.144) 0:03:12.835 ******** 2026-04-09 03:13:48.540863 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:13:48.540877 | orchestrator | 2026-04-09 03:13:48.540891 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-09 03:13:48.540906 | orchestrator | Thursday 09 April 2026 03:13:44 +0000 (0:00:00.708) 0:03:13.544 ******** 2026-04-09 03:13:48.540921 | orchestrator | ok: [testbed-node-0] 2026-04-09 
03:13:48.540935 | orchestrator | 2026-04-09 03:13:48.540949 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 03:13:48.540964 | orchestrator | Thursday 09 April 2026 03:13:47 +0000 (0:00:03.276) 0:03:16.820 ******** 2026-04-09 03:13:48.540979 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:13:48.540993 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:13:48.541009 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:13:48.541020 | orchestrator | 2026-04-09 03:13:48.541028 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:13:48.541038 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 03:13:48.541048 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-09 03:13:48.541057 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-09 03:13:48.541065 | orchestrator | 2026-04-09 03:13:48.541074 | orchestrator | 2026-04-09 03:13:48.541083 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:13:48.541092 | orchestrator | Thursday 09 April 2026 03:13:48 +0000 (0:00:00.480) 0:03:17.301 ******** 2026-04-09 03:13:48.541101 | orchestrator | =============================================================================== 2026-04-09 03:13:48.541109 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 48.80s 2026-04-09 03:13:48.541118 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.89s 2026-04-09 03:13:48.541127 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.08s 2026-04-09 03:13:48.541135 | orchestrator | keystone : Restart keystone container 
---------------------------------- 11.94s 2026-04-09 03:13:48.541144 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.74s 2026-04-09 03:13:48.541152 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.12s 2026-04-09 03:13:48.541161 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.49s 2026-04-09 03:13:48.541169 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.80s 2026-04-09 03:13:48.541189 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.62s 2026-04-09 03:13:48.541247 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.96s 2026-04-09 03:13:48.541258 | orchestrator | keystone : Creating default user role ----------------------------------- 3.28s 2026-04-09 03:13:48.541267 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.24s 2026-04-09 03:13:48.541276 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.23s 2026-04-09 03:13:48.541284 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.73s 2026-04-09 03:13:48.541293 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.32s 2026-04-09 03:13:48.541301 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.28s 2026-04-09 03:13:48.541310 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.25s 2026-04-09 03:13:48.541319 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.94s 2026-04-09 03:13:48.541327 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.89s 2026-04-09 03:13:48.541336 | orchestrator | keystone : Ensuring config directories exist 
---------------------------- 1.70s
2026-04-09 03:13:51.308168 | orchestrator | 2026-04-09 03:13:51 | INFO  | Task 2abde736-4ad2-41a1-ba59-a4115908b1cf (placement) was prepared for execution.
2026-04-09 03:13:51.308329 | orchestrator | 2026-04-09 03:13:51 | INFO  | It takes a moment until task 2abde736-4ad2-41a1-ba59-a4115908b1cf (placement) has been started and output is visible here.
2026-04-09 03:14:27.878778 | orchestrator |
2026-04-09 03:14:27.878933 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 03:14:27.878961 | orchestrator |
2026-04-09 03:14:27.878978 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 03:14:27.878995 | orchestrator | Thursday 09 April 2026 03:13:56 +0000 (0:00:00.306) 0:00:00.306 ********
2026-04-09 03:14:27.879011 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:14:27.879027 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:14:27.879045 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:14:27.879061 | orchestrator |
2026-04-09 03:14:27.879079 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 03:14:27.879094 | orchestrator | Thursday 09 April 2026 03:13:56 +0000 (0:00:00.364) 0:00:00.670 ********
2026-04-09 03:14:27.879112 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-09 03:14:27.879129 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-09 03:14:27.879144 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-09 03:14:27.879160 | orchestrator |
2026-04-09 03:14:27.879316 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-09 03:14:27.879340 | orchestrator |
2026-04-09 03:14:27.879357 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-09 03:14:27.879375 | orchestrator | Thursday 09 April 2026 03:13:57 +0000 (0:00:00.503) 0:00:01.174 ********
2026-04-09 03:14:27.879395 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:14:27.879414 | orchestrator |
2026-04-09 03:14:27.879432 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-04-09 03:14:27.879450 | orchestrator | Thursday 09 April 2026 03:13:57 +0000 (0:00:00.596) 0:00:01.770 ********
2026-04-09 03:14:27.879469 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-04-09 03:14:27.879488 | orchestrator |
2026-04-09 03:14:27.879506 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-04-09 03:14:27.879525 | orchestrator | Thursday 09 April 2026 03:14:01 +0000 (0:00:03.980) 0:00:05.751 ********
2026-04-09 03:14:27.879543 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-09 03:14:27.879595 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-09 03:14:27.879615 | orchestrator |
2026-04-09 03:14:27.879631 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-09 03:14:27.879648 | orchestrator | Thursday 09 April 2026 03:14:08 +0000 (0:00:06.560) 0:00:12.311 ********
2026-04-09 03:14:27.879664 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-04-09 03:14:27.879682 | orchestrator |
2026-04-09 03:14:27.879700 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-09 03:14:27.879717 | orchestrator | Thursday 09 April 2026 03:14:12 +0000 (0:00:03.800) 0:00:16.112 ********
2026-04-09 03:14:27.879736 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 03:14:27.879753 | orchestrator | changed:
[testbed-node-0] => (item=placement -> service) 2026-04-09 03:14:27.879770 | orchestrator | 2026-04-09 03:14:27.879787 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-09 03:14:27.879804 | orchestrator | Thursday 09 April 2026 03:14:16 +0000 (0:00:04.091) 0:00:20.203 ******** 2026-04-09 03:14:27.879821 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 03:14:27.879838 | orchestrator | 2026-04-09 03:14:27.879855 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-09 03:14:27.879872 | orchestrator | Thursday 09 April 2026 03:14:19 +0000 (0:00:03.128) 0:00:23.332 ******** 2026-04-09 03:14:27.879889 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-09 03:14:27.879906 | orchestrator | 2026-04-09 03:14:27.879923 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-09 03:14:27.879941 | orchestrator | Thursday 09 April 2026 03:14:23 +0000 (0:00:03.807) 0:00:27.140 ******** 2026-04-09 03:14:27.879958 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:14:27.879976 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:14:27.879995 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:14:27.880012 | orchestrator | 2026-04-09 03:14:27.880028 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-09 03:14:27.880045 | orchestrator | Thursday 09 April 2026 03:14:23 +0000 (0:00:00.327) 0:00:27.467 ******** 2026-04-09 03:14:27.880068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:27.880135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:27.880319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:27.880345 | orchestrator | 2026-04-09 03:14:27.880361 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-09 03:14:27.880378 | orchestrator | Thursday 09 April 2026 03:14:24 +0000 (0:00:01.206) 0:00:28.673 ******** 2026-04-09 03:14:27.880393 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:14:27.880408 | orchestrator | 2026-04-09 03:14:27.880423 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-09 03:14:27.880438 | orchestrator | Thursday 09 April 2026 03:14:25 +0000 (0:00:00.373) 0:00:29.047 ******** 2026-04-09 03:14:27.880454 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:14:27.880469 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:14:27.880484 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:14:27.880499 | orchestrator | 2026-04-09 03:14:27.880514 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-09 03:14:27.880528 | orchestrator | Thursday 09 April 2026 03:14:25 +0000 (0:00:00.384) 0:00:29.432 ******** 2026-04-09 03:14:27.880543 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:14:27.880559 | orchestrator | 2026-04-09 03:14:27.880575 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-09 03:14:27.880589 | orchestrator | Thursday 09 April 2026 
03:14:26 +0000 (0:00:00.611) 0:00:30.043 ******** 2026-04-09 03:14:27.880605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:27.880641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:30.988983 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:30.989080 | orchestrator | 2026-04-09 03:14:30.989097 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-09 03:14:30.989109 | orchestrator | Thursday 09 April 2026 03:14:27 +0000 (0:00:01.704) 0:00:31.748 ******** 2026-04-09 03:14:30.989122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 03:14:30.989130 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:14:30.989137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 03:14:30.989144 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:14:30.989154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 03:14:30.989260 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:14:30.989273 | orchestrator | 2026-04-09 03:14:30.989284 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-09 03:14:30.989313 | orchestrator | Thursday 09 April 2026 03:14:28 +0000 (0:00:00.632) 0:00:32.380 ******** 2026-04-09 03:14:30.989332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 03:14:30.989344 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:14:30.989354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 03:14:30.989364 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:14:30.989375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 03:14:30.989385 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:14:30.989395 | orchestrator | 2026-04-09 03:14:30.989405 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-09 03:14:30.989414 | orchestrator | Thursday 09 April 2026 03:14:29 +0000 (0:00:00.762) 0:00:33.143 ******** 2026-04-09 03:14:30.989420 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:30.989451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:38.207834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:38.207960 | orchestrator | 2026-04-09 03:14:38.207974 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-09 03:14:38.207983 | orchestrator | Thursday 09 April 2026 03:14:30 +0000 (0:00:01.722) 0:00:34.865 ******** 2026-04-09 03:14:38.207991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-04-09 03:14:38.207999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:38.208039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:14:38.208054 | orchestrator | 2026-04-09 03:14:38.208061 | orchestrator | 
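Every container item dumped in the tasks above carries the same `healthcheck` block (`interval`, `retries`, `start_period`, `test`, `timeout`). As an illustration outside the job output, a hedged sketch of how such a dict maps onto equivalent `docker run` health flags; the mapping is my reading of the values shown here, not kolla-ansible's actual code path:

```python
def health_flags(hc: dict) -> list:
    """Translate a kolla-style healthcheck dict into docker CLI health flags."""
    cmd = hc["test"]
    # The log shows ['CMD-SHELL', '<shell command>']; keep just the command.
    shell_cmd = cmd[1] if cmd and cmd[0] == "CMD-SHELL" else " ".join(cmd)
    return [
        "--health-cmd=" + shell_cmd,
        "--health-interval=" + hc["interval"] + "s",
        "--health-retries=" + hc["retries"],
        "--health-start-period=" + hc["start_period"] + "s",
        "--health-timeout=" + hc["timeout"] + "s",
    ]

# Values copied verbatim from the placement-api item for testbed-node-0.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
      "timeout": "30"}
flags = health_flags(hc)
```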
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-09 03:14:38.208069 | orchestrator | Thursday 09 April 2026 03:14:33 +0000 (0:00:02.328) 0:00:37.194 ******** 2026-04-09 03:14:38.208090 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-09 03:14:38.208099 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-09 03:14:38.208106 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-09 03:14:38.208113 | orchestrator | 2026-04-09 03:14:38.208119 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-09 03:14:38.208126 | orchestrator | Thursday 09 April 2026 03:14:34 +0000 (0:00:01.568) 0:00:38.762 ******** 2026-04-09 03:14:38.208133 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:14:38.208141 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:14:38.208147 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:14:38.208154 | orchestrator | 2026-04-09 03:14:38.208194 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-09 03:14:38.208203 | orchestrator | Thursday 09 April 2026 03:14:36 +0000 (0:00:01.425) 0:00:40.187 ******** 2026-04-09 03:14:38.208210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 03:14:38.208217 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:14:38.208224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 03:14:38.208239 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:14:38.208246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 03:14:38.208253 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:14:38.208259 | orchestrator | 2026-04-09 03:14:38.208266 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-09 03:14:38.208277 | orchestrator | Thursday 09 April 2026 03:14:37 +0000 (0:00:00.799) 0:00:40.986 ******** 2026-04-09 03:14:38.208291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:15:08.431937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:15:08.432081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 03:15:08.432100 | orchestrator | 2026-04-09 03:15:08.432115 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-09 03:15:08.432128 | orchestrator | Thursday 09 April 2026 03:14:38 +0000 (0:00:01.102) 0:00:42.088 ******** 2026-04-09 03:15:08.432227 | orchestrator | changed: [testbed-node-0] 2026-04-09 
03:15:08.432241 | orchestrator | 2026-04-09 03:15:08.432253 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-09 03:15:08.432264 | orchestrator | Thursday 09 April 2026 03:14:40 +0000 (0:00:02.108) 0:00:44.197 ******** 2026-04-09 03:15:08.432274 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:15:08.432286 | orchestrator | 2026-04-09 03:15:08.432298 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-09 03:15:08.432309 | orchestrator | Thursday 09 April 2026 03:14:42 +0000 (0:00:02.315) 0:00:46.512 ******** 2026-04-09 03:15:08.432320 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:15:08.432331 | orchestrator | 2026-04-09 03:15:08.432342 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-09 03:15:08.432353 | orchestrator | Thursday 09 April 2026 03:14:57 +0000 (0:00:14.627) 0:01:01.140 ******** 2026-04-09 03:15:08.432366 | orchestrator | 2026-04-09 03:15:08.432385 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-09 03:15:08.432403 | orchestrator | Thursday 09 April 2026 03:14:57 +0000 (0:00:00.075) 0:01:01.215 ******** 2026-04-09 03:15:08.432420 | orchestrator | 2026-04-09 03:15:08.432437 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-09 03:15:08.432455 | orchestrator | Thursday 09 April 2026 03:14:57 +0000 (0:00:00.071) 0:01:01.287 ******** 2026-04-09 03:15:08.432473 | orchestrator | 2026-04-09 03:15:08.432493 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-09 03:15:08.432511 | orchestrator | Thursday 09 April 2026 03:14:57 +0000 (0:00:00.073) 0:01:01.360 ******** 2026-04-09 03:15:08.432529 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:15:08.432566 | orchestrator | changed: [testbed-node-1] 2026-04-09 
03:15:08.432584 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:15:08.432602 | orchestrator | 2026-04-09 03:15:08.432620 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:15:08.432641 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 03:15:08.432662 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 03:15:08.432681 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 03:15:08.432701 | orchestrator | 2026-04-09 03:15:08.432715 | orchestrator | 2026-04-09 03:15:08.432726 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:15:08.432737 | orchestrator | Thursday 09 April 2026 03:15:08 +0000 (0:00:10.548) 0:01:11.908 ******** 2026-04-09 03:15:08.432761 | orchestrator | =============================================================================== 2026-04-09 03:15:08.432772 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.63s 2026-04-09 03:15:08.432803 | orchestrator | placement : Restart placement-api container ---------------------------- 10.55s 2026-04-09 03:15:08.432818 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.56s 2026-04-09 03:15:08.432837 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.09s 2026-04-09 03:15:08.432854 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.98s 2026-04-09 03:15:08.432871 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.81s 2026-04-09 03:15:08.432889 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.80s 2026-04-09 03:15:08.432906 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.13s 2026-04-09 03:15:08.432941 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.33s 2026-04-09 03:15:08.432960 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.32s 2026-04-09 03:15:08.432991 | orchestrator | placement : Creating placement databases -------------------------------- 2.11s 2026-04-09 03:15:08.433008 | orchestrator | placement : Copying over config.json files for services ----------------- 1.72s 2026-04-09 03:15:08.433027 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.70s 2026-04-09 03:15:08.433045 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.57s 2026-04-09 03:15:08.433062 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.43s 2026-04-09 03:15:08.433080 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.21s 2026-04-09 03:15:08.433100 | orchestrator | placement : Check placement containers ---------------------------------- 1.10s 2026-04-09 03:15:08.433118 | orchestrator | placement : Copying over existing policy file --------------------------- 0.80s 2026-04-09 03:15:08.433163 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.76s 2026-04-09 03:15:08.433175 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.63s 2026-04-09 03:15:11.121883 | orchestrator | 2026-04-09 03:15:11 | INFO  | Task f6ba0e95-e1cc-42e3-83fa-8816503cb40c (neutron) was prepared for execution. 2026-04-09 03:15:11.121974 | orchestrator | 2026-04-09 03:15:11 | INFO  | It takes a moment until task f6ba0e95-e1cc-42e3-83fa-8816503cb40c (neutron) has been started and output is visible here. 
2026-04-09 03:16:03.078906 | orchestrator | 2026-04-09 03:16:03.078972 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:16:03.078984 | orchestrator | 2026-04-09 03:16:03.078992 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:16:03.079000 | orchestrator | Thursday 09 April 2026 03:15:15 +0000 (0:00:00.416) 0:00:00.416 ******** 2026-04-09 03:16:03.079007 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:16:03.079016 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:16:03.079024 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:16:03.079028 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:16:03.079032 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:16:03.079036 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:16:03.079040 | orchestrator | 2026-04-09 03:16:03.079044 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:16:03.079048 | orchestrator | Thursday 09 April 2026 03:15:16 +0000 (0:00:00.847) 0:00:01.263 ******** 2026-04-09 03:16:03.079052 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-09 03:16:03.079057 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-09 03:16:03.079061 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-09 03:16:03.079064 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-09 03:16:03.079068 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-09 03:16:03.079107 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-09 03:16:03.079113 | orchestrator | 2026-04-09 03:16:03.079117 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-09 03:16:03.079121 | orchestrator | 2026-04-09 03:16:03.079125 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-04-09 03:16:03.079129 | orchestrator | Thursday 09 April 2026 03:15:17 +0000 (0:00:00.765) 0:00:02.029 ******** 2026-04-09 03:16:03.079140 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:16:03.079144 | orchestrator | 2026-04-09 03:16:03.079148 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-09 03:16:03.079152 | orchestrator | Thursday 09 April 2026 03:15:19 +0000 (0:00:01.525) 0:00:03.555 ******** 2026-04-09 03:16:03.079157 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:16:03.079166 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:16:03.079176 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:16:03.079183 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:16:03.079190 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:16:03.079198 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:16:03.079206 | orchestrator | 2026-04-09 03:16:03.079212 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-09 03:16:03.079216 | orchestrator | Thursday 09 April 2026 03:15:20 +0000 (0:00:01.411) 0:00:04.966 ******** 2026-04-09 03:16:03.079220 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:16:03.079223 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:16:03.079227 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:16:03.079231 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:16:03.079235 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:16:03.079239 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:16:03.079243 | orchestrator | 2026-04-09 03:16:03.079247 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-09 03:16:03.079251 | orchestrator | Thursday 09 April 2026 03:15:21 +0000 (0:00:01.212) 0:00:06.178 ******** 
2026-04-09 03:16:03.079255 | orchestrator | ok: [testbed-node-0] => { 2026-04-09 03:16:03.079259 | orchestrator |  "changed": false, 2026-04-09 03:16:03.079263 | orchestrator |  "msg": "All assertions passed" 2026-04-09 03:16:03.079267 | orchestrator | } 2026-04-09 03:16:03.079271 | orchestrator | ok: [testbed-node-1] => { 2026-04-09 03:16:03.079275 | orchestrator |  "changed": false, 2026-04-09 03:16:03.079279 | orchestrator |  "msg": "All assertions passed" 2026-04-09 03:16:03.079283 | orchestrator | } 2026-04-09 03:16:03.079287 | orchestrator | ok: [testbed-node-2] => { 2026-04-09 03:16:03.079291 | orchestrator |  "changed": false, 2026-04-09 03:16:03.079295 | orchestrator |  "msg": "All assertions passed" 2026-04-09 03:16:03.079298 | orchestrator | } 2026-04-09 03:16:03.079302 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 03:16:03.079306 | orchestrator |  "changed": false, 2026-04-09 03:16:03.079310 | orchestrator |  "msg": "All assertions passed" 2026-04-09 03:16:03.079314 | orchestrator | } 2026-04-09 03:16:03.079318 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 03:16:03.079322 | orchestrator |  "changed": false, 2026-04-09 03:16:03.079326 | orchestrator |  "msg": "All assertions passed" 2026-04-09 03:16:03.079330 | orchestrator | } 2026-04-09 03:16:03.079334 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 03:16:03.079338 | orchestrator |  "changed": false, 2026-04-09 03:16:03.079342 | orchestrator |  "msg": "All assertions passed" 2026-04-09 03:16:03.079353 | orchestrator | } 2026-04-09 03:16:03.079361 | orchestrator | 2026-04-09 03:16:03.079365 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-09 03:16:03.079369 | orchestrator | Thursday 09 April 2026 03:15:22 +0000 (0:00:01.028) 0:00:07.207 ******** 2026-04-09 03:16:03.079373 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:03.079376 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:16:03.079380 | orchestrator 
| skipping: [testbed-node-2] 2026-04-09 03:16:03.079389 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:16:03.079393 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:16:03.079397 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:16:03.079401 | orchestrator | 2026-04-09 03:16:03.079405 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-09 03:16:03.079409 | orchestrator | Thursday 09 April 2026 03:15:23 +0000 (0:00:00.723) 0:00:07.931 ******** 2026-04-09 03:16:03.079413 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-09 03:16:03.079417 | orchestrator | 2026-04-09 03:16:03.079421 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-04-09 03:16:03.079425 | orchestrator | Thursday 09 April 2026 03:15:27 +0000 (0:00:03.752) 0:00:11.683 ******** 2026-04-09 03:16:03.079429 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-09 03:16:03.079433 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-09 03:16:03.079437 | orchestrator | 2026-04-09 03:16:03.079450 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-09 03:16:03.079454 | orchestrator | Thursday 09 April 2026 03:15:33 +0000 (0:00:06.533) 0:00:18.216 ******** 2026-04-09 03:16:03.079458 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 03:16:03.079462 | orchestrator | 2026-04-09 03:16:03.079466 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-09 03:16:03.079470 | orchestrator | Thursday 09 April 2026 03:15:36 +0000 (0:00:03.147) 0:00:21.364 ******** 2026-04-09 03:16:03.079474 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 03:16:03.079478 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-04-09 03:16:03.079482 | orchestrator | 2026-04-09 03:16:03.079486 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-09 03:16:03.079490 | orchestrator | Thursday 09 April 2026 03:15:40 +0000 (0:00:03.916) 0:00:25.280 ******** 2026-04-09 03:16:03.079494 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 03:16:03.079498 | orchestrator | 2026-04-09 03:16:03.079502 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-04-09 03:16:03.079506 | orchestrator | Thursday 09 April 2026 03:15:44 +0000 (0:00:03.590) 0:00:28.870 ******** 2026-04-09 03:16:03.079510 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-09 03:16:03.079514 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-09 03:16:03.079518 | orchestrator | 2026-04-09 03:16:03.079522 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 03:16:03.079526 | orchestrator | Thursday 09 April 2026 03:15:52 +0000 (0:00:07.902) 0:00:36.772 ******** 2026-04-09 03:16:03.079530 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:03.079534 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:16:03.079538 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:16:03.079542 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:16:03.079548 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:16:03.079552 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:16:03.079556 | orchestrator | 2026-04-09 03:16:03.079560 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-09 03:16:03.079564 | orchestrator | Thursday 09 April 2026 03:15:53 +0000 (0:00:00.942) 0:00:37.715 ******** 2026-04-09 03:16:03.079568 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
03:16:03.079572 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:16:03.079576 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:16:03.079580 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:16:03.079584 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:16:03.079588 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:16:03.079592 | orchestrator | 2026-04-09 03:16:03.079596 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-09 03:16:03.079603 | orchestrator | Thursday 09 April 2026 03:15:56 +0000 (0:00:02.940) 0:00:40.655 ******** 2026-04-09 03:16:03.079607 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:16:03.079611 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:16:03.079614 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:16:03.079618 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:16:03.079622 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:16:03.079626 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:16:03.079630 | orchestrator | 2026-04-09 03:16:03.079634 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-09 03:16:03.079638 | orchestrator | Thursday 09 April 2026 03:15:57 +0000 (0:00:01.354) 0:00:42.009 ******** 2026-04-09 03:16:03.079642 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:16:03.079646 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:03.079650 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:16:03.079654 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:16:03.079658 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:16:03.079662 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:16:03.079665 | orchestrator | 2026-04-09 03:16:03.079669 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-09 03:16:03.079673 | orchestrator | Thursday 09 April 2026 03:16:00 +0000 (0:00:02.745) 
0:00:44.755 ******** 2026-04-09 03:16:03.079679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:03.079689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:08.940439 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:08.940584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:08.940598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:08.940605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:08.940613 | orchestrator | 2026-04-09 03:16:08.940622 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-09 03:16:08.940631 | orchestrator | Thursday 09 April 2026 03:16:03 +0000 (0:00:02.759) 0:00:47.514 ******** 2026-04-09 03:16:08.940638 | orchestrator | [WARNING]: Skipped 2026-04-09 03:16:08.940645 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-09 03:16:08.940653 | orchestrator | due to this access issue: 2026-04-09 03:16:08.940661 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-09 03:16:08.940667 | orchestrator | a directory 2026-04-09 03:16:08.940674 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:16:08.940680 | orchestrator | 2026-04-09 03:16:08.940687 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 03:16:08.940693 | orchestrator | Thursday 09 April 2026 03:16:03 +0000 (0:00:00.831) 0:00:48.346 ******** 2026-04-09 03:16:08.940700 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:16:08.940707 | orchestrator | 2026-04-09 03:16:08.940713 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-09 03:16:08.940735 | orchestrator | Thursday 09 April 2026 03:16:05 +0000 (0:00:01.420) 0:00:49.767 ******** 2026-04-09 03:16:08.940748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:08.940764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:08.940770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:08.940777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:08.940790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:14.274743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:14.274826 | orchestrator | 2026-04-09 03:16:14.274836 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-09 03:16:14.274844 | orchestrator | Thursday 09 April 2026 03:16:08 +0000 (0:00:03.600) 0:00:53.368 ******** 2026-04-09 03:16:14.274852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 03:16:14.274861 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:14.274869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 03:16:14.274876 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:16:14.274882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 03:16:14.274905 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:16:14.274924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 03:16:14.274931 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:16:14.274942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 03:16:14.274949 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:16:14.274955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 03:16:14.274962 | orchestrator | skipping: [testbed-node-4] 
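(Editor's note: the `healthcheck` entries logged above, e.g. `healthcheck_port neutron-ovn-metadata-agent 6640` and `healthcheck_curl http://192.168.16.10:9696`, gate container health on connectivity. A minimal sketch of such a TCP probe follows; `port_is_open` is a hypothetical helper, not kolla's actual healthcheck script, which additionally verifies that the named process owns the connection.)

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout,
    roughly what a healthcheck_port-style probe tests."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: a locally bound listening socket should be reported as open.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port
server.listen(1)
_, demo_port = server.getsockname()
print(port_is_open("127.0.0.1", demo_port))
server.close()
```

In the log, the `'interval': '30'`, `'retries': '3'`, and `'timeout': '30'` keys map onto Docker healthcheck semantics: the probe runs every 30 s with a 30 s timeout, and the container is marked unhealthy after 3 consecutive failures.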
2026-04-09 03:16:14.274968 | orchestrator | 2026-04-09 03:16:14.274974 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-09 03:16:14.274981 | orchestrator | Thursday 09 April 2026 03:16:11 +0000 (0:00:02.308) 0:00:55.677 ******** 2026-04-09 03:16:14.274987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 03:16:14.274993 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:14.275004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 03:16:20.072854 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:16:20.073018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 03:16:20.073044 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:16:20.073109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 03:16:20.073123 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:16:20.073135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 03:16:20.073148 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:16:20.073160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 03:16:20.073196 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:16:20.073208 | orchestrator | 2026-04-09 
03:16:20.073221 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-09 03:16:20.073233 | orchestrator | Thursday 09 April 2026 03:16:14 +0000 (0:00:03.028) 0:00:58.705 ******** 2026-04-09 03:16:20.073244 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:16:20.073255 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:20.073266 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:16:20.073277 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:16:20.073288 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:16:20.073298 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:16:20.073309 | orchestrator | 2026-04-09 03:16:20.073320 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-09 03:16:20.073331 | orchestrator | Thursday 09 April 2026 03:16:16 +0000 (0:00:02.546) 0:01:01.252 ******** 2026-04-09 03:16:20.073342 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:20.073352 | orchestrator | 2026-04-09 03:16:20.073364 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-09 03:16:20.073396 | orchestrator | Thursday 09 April 2026 03:16:16 +0000 (0:00:00.163) 0:01:01.415 ******** 2026-04-09 03:16:20.073410 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:20.073422 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:16:20.073435 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:16:20.073448 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:16:20.073461 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:16:20.073473 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:16:20.073486 | orchestrator | 2026-04-09 03:16:20.073500 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-09 03:16:20.073513 | orchestrator | Thursday 09 April 2026 03:16:17 +0000 (0:00:00.662) 
0:01:02.078 ******** 2026-04-09 03:16:20.073534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 03:16:20.073549 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:20.073564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 
03:16:20.073589 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:16:20.073603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 03:16:20.073617 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:16:20.073631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 03:16:20.073645 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:16:20.073672 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 03:16:29.101945 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:16:29.102117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 03:16:29.102130 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:16:29.102134 | orchestrator | 2026-04-09 03:16:29.102140 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-09 03:16:29.102145 | orchestrator | Thursday 09 April 2026 03:16:20 +0000 (0:00:02.417) 0:01:04.495 ******** 2026-04-09 03:16:29.102150 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:29.102174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:29.102179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:29.102205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:29.102210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:29.102219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:29.102223 | orchestrator | 2026-04-09 03:16:29.102239 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-09 03:16:29.102253 | orchestrator | Thursday 09 April 2026 03:16:23 +0000 (0:00:03.369) 0:01:07.864 ******** 2026-04-09 03:16:29.102264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:29.102270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:29.102287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:16:34.156462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:34.156553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 
03:16:34.156560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:16:34.156565 | orchestrator | 2026-04-09 03:16:34.156570 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-09 03:16:34.156576 | orchestrator | Thursday 09 April 2026 03:16:29 +0000 (0:00:05.669) 0:01:13.534 ******** 2026-04-09 03:16:34.156590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-04-09 03:16:34.156595 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:16:34.156610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 03:16:34.156619 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:16:34.156623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:16:34.156627 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:34.156631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:16:34.156635 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:34.156639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:16:34.156644 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:34.156651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:16:34.156655 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:34.156659 | orchestrator |
2026-04-09 03:16:34.156663 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-04-09 03:16:34.156672 | orchestrator | Thursday 09 April 2026 03:16:31 +0000 (0:00:02.272) 0:01:15.807 ********
2026-04-09 03:16:34.156676 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:34.156680 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:34.156684 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:34.156696 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:16:34.156700 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:16:34.156707 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:16:54.680183 | orchestrator |
2026-04-09 03:16:54.680319 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-04-09 03:16:54.680331 | orchestrator | Thursday 09 April 2026 03:16:34 +0000 (0:00:02.778) 0:01:18.585 ********
2026-04-09 03:16:54.680342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:16:54.680352 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:54.680361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:16:54.680369 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:54.680376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:16:54.680383 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:54.680405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:16:54.680448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:16:54.680457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:16:54.680464 | orchestrator |
2026-04-09 03:16:54.680470 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-04-09 03:16:54.680477 | orchestrator | Thursday 09 April 2026 03:16:37 +0000 (0:00:03.527) 0:01:22.112 ********
2026-04-09 03:16:54.680484 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:16:54.680490 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:54.680497 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:16:54.680504 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:54.680510 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:54.680517 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:54.680523 | orchestrator |
2026-04-09 03:16:54.680530 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-04-09 03:16:54.680537 | orchestrator | Thursday 09 April 2026 03:16:40 +0000 (0:00:02.729) 0:01:24.842 ********
2026-04-09 03:16:54.680544 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:16:54.680550 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:16:54.680557 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:54.680564 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:54.680570 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:54.680577 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:54.680583 | orchestrator |
2026-04-09 03:16:54.680590 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-04-09 03:16:54.680597 | orchestrator | Thursday 09 April 2026 03:16:42 +0000 (0:00:02.256) 0:01:27.099 ********
2026-04-09 03:16:54.680603 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:54.680611 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:16:54.680617 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:16:54.680624 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:54.680631 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:54.680637 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:54.680644 | orchestrator |
2026-04-09 03:16:54.680651 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-04-09 03:16:54.680663 | orchestrator | Thursday 09 April 2026 03:16:45 +0000 (0:00:02.405) 0:01:29.504 ********
2026-04-09 03:16:54.680669 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:54.680676 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:16:54.680683 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:16:54.680689 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:54.680695 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:54.680702 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:54.680709 | orchestrator |
2026-04-09 03:16:54.680715 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-04-09 03:16:54.680722 | orchestrator | Thursday 09 April 2026 03:16:47 +0000 (0:00:02.219) 0:01:31.724 ********
2026-04-09 03:16:54.680729 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:16:54.680735 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:54.680742 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:16:54.680749 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:54.680757 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:54.680764 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:54.680771 | orchestrator |
2026-04-09 03:16:54.680779 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-04-09 03:16:54.680786 | orchestrator | Thursday 09 April 2026 03:16:49 +0000 (0:00:02.706) 0:01:34.431 ********
2026-04-09 03:16:54.680793 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:16:54.680800 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:16:54.680807 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:54.680819 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:54.680826 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:54.680833 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:54.680841 | orchestrator |
2026-04-09 03:16:54.680848 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-04-09 03:16:54.680856 | orchestrator | Thursday 09 April 2026 03:16:52 +0000 (0:00:02.376) 0:01:36.807 ********
2026-04-09 03:16:54.680864 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 03:16:54.680873 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 03:16:54.680880 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:54.680887 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:16:54.680894 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 03:16:54.680906 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:59.528828 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 03:16:59.528928 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:16:59.528948 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 03:16:59.528962 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:59.528976 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 03:16:59.528991 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:59.529006 | orchestrator |
2026-04-09 03:16:59.529022 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-04-09 03:16:59.529038 | orchestrator | Thursday 09 April 2026 03:16:54 +0000 (0:00:02.299) 0:01:39.107 ********
2026-04-09 03:16:59.529058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:16:59.529110 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:16:59.529127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:16:59.529144 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:59.529154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:16:59.529163 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:16:59.529205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:16:59.529216 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:16:59.529225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:16:59.529242 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:16:59.529251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:16:59.529290 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:16:59.529299 | orchestrator |
2026-04-09 03:16:59.529308 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-04-09 03:16:59.529317 | orchestrator | Thursday 09 April 2026 03:16:56 +0000 (0:00:02.305) 0:01:41.413 ********
2026-04-09 03:16:59.529326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:16:59.529335 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:16:59.529350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:16:59.529359 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:16:59.529376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:17:28.135837 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.135957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:17:28.135987 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.136003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:17:28.136019 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.136033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:17:28.136047 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.136062 | orchestrator |
2026-04-09 03:17:28.136079 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-04-09 03:17:28.136096 | orchestrator | Thursday 09 April 2026 03:16:59 +0000 (0:00:02.548) 0:01:43.961 ********
2026-04-09 03:17:28.136111 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.136125 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.136139 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.136155 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.136167 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.136176 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.136185 | orchestrator |
2026-04-09 03:17:28.136210 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-04-09 03:17:28.136223 | orchestrator | Thursday 09 April 2026 03:17:01 +0000 (0:00:02.375) 0:01:46.337 ********
2026-04-09 03:17:28.136237 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.136251 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.136263 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.136276 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:17:28.136288 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:17:28.136301 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:17:28.136371 | orchestrator |
2026-04-09 03:17:28.136390 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-04-09 03:17:28.136434 | orchestrator | Thursday 09 April 2026 03:17:06 +0000 (0:00:04.120) 0:01:50.458 ********
2026-04-09 03:17:28.136448 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.136465 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.136479 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.136493 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.136508 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.136521 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.136537 | orchestrator |
2026-04-09 03:17:28.136553 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-04-09 03:17:28.136568 | orchestrator | Thursday 09 April 2026 03:17:08 +0000 (0:00:02.358) 0:01:52.816 ********
2026-04-09 03:17:28.136584 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.136594 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.136604 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.136614 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.136624 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.136634 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.136644 | orchestrator |
2026-04-09 03:17:28.136654 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-04-09 03:17:28.136686 | orchestrator | Thursday 09 April 2026 03:17:11 +0000 (0:00:02.641) 0:01:55.457 ********
2026-04-09 03:17:28.136697 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.136707 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.136717 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.136727 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.136737 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.136747 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.136757 | orchestrator |
2026-04-09 03:17:28.136767 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-09 03:17:28.136778 | orchestrator | Thursday 09 April 2026 03:17:13 +0000 (0:00:02.460) 0:01:57.918 ********
2026-04-09 03:17:28.136787 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.136796 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.136805 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.136813 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.136822 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.136830 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.136838 | orchestrator |
2026-04-09 03:17:28.136847 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-09 03:17:28.136856 | orchestrator | Thursday 09 April 2026 03:17:15 +0000 (0:00:02.380) 0:02:00.299 ********
2026-04-09 03:17:28.136864 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.136873 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.136882 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.136890 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.136899 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.136907 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.136916 | orchestrator |
2026-04-09 03:17:28.136924 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-09 03:17:28.136933 | orchestrator | Thursday 09 April 2026 03:17:18 +0000 (0:00:02.339) 0:02:02.638 ********
2026-04-09 03:17:28.136942 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.136950 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.136959 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.136967 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.136976 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.136984 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.136993 | orchestrator |
2026-04-09 03:17:28.137001 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-09 03:17:28.137010 | orchestrator | Thursday 09 April 2026 03:17:20 +0000 (0:00:02.293) 0:02:04.932 ********
2026-04-09 03:17:28.137018 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.137036 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.137044 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.137058 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.137072 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.137086 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.137100 | orchestrator |
2026-04-09 03:17:28.137114 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-09 03:17:28.137129 | orchestrator | Thursday 09 April 2026 03:17:23 +0000 (0:00:02.583) 0:02:07.516 ********
2026-04-09 03:17:28.137142 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 03:17:28.137157 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.137172 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 03:17:28.137186 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:28.137200 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 03:17:28.137214 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:28.137230 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 03:17:28.137245 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:28.137259 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 03:17:28.137274 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 03:17:28.137300 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:28.137339 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:28.137354 | orchestrator |
2026-04-09 03:17:28.137369 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-04-09 03:17:28.137383 | orchestrator | Thursday 09 April 2026 03:17:25 +0000 (0:00:02.022) 0:02:09.538 ********
2026-04-09 03:17:28.137399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:17:28.137411 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:17:28.137431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:17:30.934484 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:17:30.934682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 03:17:30.934720 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:17:30.935697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:17:30.935747 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:17:30.935788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:17:30.935805 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:17:30.935815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 03:17:30.935825 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:17:30.935835 | orchestrator |
2026-04-09 03:17:30.935846 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-04-09 03:17:30.935857 | orchestrator | Thursday 09 April 2026 03:17:28 +0000 (0:00:03.028) 0:02:12.567 ********
2026-04-09 03:17:30.935892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696',
'listen_port': '9696'}}}}) 2026-04-09 03:17:30.935919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:17:30.935935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 03:17:30.935946 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:17:30.935957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:17:30.935980 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 03:19:59.702275 | orchestrator | 2026-04-09 03:19:59.702358 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 03:19:59.702365 | orchestrator | Thursday 09 April 2026 03:17:30 +0000 (0:00:02.802) 0:02:15.369 ******** 2026-04-09 03:19:59.702370 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:19:59.702375 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:19:59.702379 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:19:59.702383 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:19:59.702387 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:19:59.702391 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:19:59.702395 | orchestrator | 2026-04-09 03:19:59.702399 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-04-09 03:19:59.702402 | orchestrator | Thursday 09 April 2026 03:17:31 +0000 (0:00:00.872) 0:02:16.241 ******** 2026-04-09 03:19:59.702406 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:19:59.702410 | orchestrator | 2026-04-09 03:19:59.702414 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-04-09 03:19:59.702418 | orchestrator | Thursday 09 April 2026 03:17:33 +0000 (0:00:02.119) 0:02:18.360 ******** 2026-04-09 03:19:59.702453 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:19:59.702457 | orchestrator | 2026-04-09 03:19:59.702461 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-04-09 03:19:59.702465 | orchestrator | Thursday 09 April 2026 03:17:36 +0000 (0:00:02.145) 0:02:20.506 
******** 2026-04-09 03:19:59.702469 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:19:59.702472 | orchestrator | 2026-04-09 03:19:59.702476 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 03:19:59.702480 | orchestrator | Thursday 09 April 2026 03:18:21 +0000 (0:00:45.582) 0:03:06.088 ******** 2026-04-09 03:19:59.702484 | orchestrator | 2026-04-09 03:19:59.702488 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 03:19:59.702492 | orchestrator | Thursday 09 April 2026 03:18:21 +0000 (0:00:00.085) 0:03:06.173 ******** 2026-04-09 03:19:59.702495 | orchestrator | 2026-04-09 03:19:59.702499 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 03:19:59.702503 | orchestrator | Thursday 09 April 2026 03:18:21 +0000 (0:00:00.087) 0:03:06.261 ******** 2026-04-09 03:19:59.702507 | orchestrator | 2026-04-09 03:19:59.702511 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 03:19:59.702515 | orchestrator | Thursday 09 April 2026 03:18:21 +0000 (0:00:00.072) 0:03:06.333 ******** 2026-04-09 03:19:59.702518 | orchestrator | 2026-04-09 03:19:59.702533 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 03:19:59.702537 | orchestrator | Thursday 09 April 2026 03:18:21 +0000 (0:00:00.073) 0:03:06.406 ******** 2026-04-09 03:19:59.702541 | orchestrator | 2026-04-09 03:19:59.702545 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 03:19:59.702548 | orchestrator | Thursday 09 April 2026 03:18:22 +0000 (0:00:00.092) 0:03:06.498 ******** 2026-04-09 03:19:59.702552 | orchestrator | 2026-04-09 03:19:59.702556 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-09 03:19:59.702560 | 
orchestrator | Thursday 09 April 2026 03:18:22 +0000 (0:00:00.084) 0:03:06.583 ******** 2026-04-09 03:19:59.702578 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:19:59.702582 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:19:59.702586 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:19:59.702590 | orchestrator | 2026-04-09 03:19:59.702593 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-09 03:19:59.702597 | orchestrator | Thursday 09 April 2026 03:18:52 +0000 (0:00:30.240) 0:03:36.823 ******** 2026-04-09 03:19:59.702601 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:19:59.702605 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:19:59.702608 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:19:59.702612 | orchestrator | 2026-04-09 03:19:59.702616 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:19:59.702621 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-09 03:19:59.702626 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-09 03:19:59.702630 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-09 03:19:59.702634 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-09 03:19:59.702638 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-09 03:19:59.702641 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-09 03:19:59.702652 | orchestrator | 2026-04-09 03:19:59.702656 | orchestrator | 2026-04-09 03:19:59.702660 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 
03:19:59.702663 | orchestrator | Thursday 09 April 2026 03:19:59 +0000 (0:01:06.749) 0:04:43.572 ******** 2026-04-09 03:19:59.702673 | orchestrator | =============================================================================== 2026-04-09 03:19:59.702677 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 66.75s 2026-04-09 03:19:59.702680 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.58s 2026-04-09 03:19:59.702684 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.24s 2026-04-09 03:19:59.702698 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.90s 2026-04-09 03:19:59.702702 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.53s 2026-04-09 03:19:59.702706 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.67s 2026-04-09 03:19:59.702709 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.12s 2026-04-09 03:19:59.702713 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.92s 2026-04-09 03:19:59.702717 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.75s 2026-04-09 03:19:59.702720 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.60s 2026-04-09 03:19:59.702724 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.59s 2026-04-09 03:19:59.702728 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.53s 2026-04-09 03:19:59.702732 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.37s 2026-04-09 03:19:59.702736 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.15s 2026-04-09 03:19:59.702739 | 
orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.03s 2026-04-09 03:19:59.702743 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.03s 2026-04-09 03:19:59.702751 | orchestrator | Load and persist kernel modules ----------------------------------------- 2.94s 2026-04-09 03:19:59.702755 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.80s 2026-04-09 03:19:59.702759 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.78s 2026-04-09 03:19:59.702762 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.76s 2026-04-09 03:20:02.286476 | orchestrator | 2026-04-09 03:20:02 | INFO  | Task 128c2217-0e07-47a8-9727-56fce75d0e3f (nova) was prepared for execution. 2026-04-09 03:20:02.286574 | orchestrator | 2026-04-09 03:20:02 | INFO  | It takes a moment until task 128c2217-0e07-47a8-9727-56fce75d0e3f (nova) has been started and output is visible here. 
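The PLAY RECAP above reports per-host counters in a fixed `key=value` layout that is easy to machine-check before the next play starts. A minimal sketch (not part of the job; the regex and helper name are made up) that parses one recap host line into a dict:

```python
import re

# Hypothetical helper: parse an Ansible PLAY RECAP host line, e.g.
# "testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 ..."
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)\s+"
    r"skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+"
    r"ignored=(?P<ignored>\d+)"
)

def parse_recap(line: str) -> dict:
    """Return the recap counters as ints, keyed by counter name."""
    m = RECAP_RE.search(line)
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    d = m.groupdict()
    return {k: (v if k == "host" else int(v)) for k, v in d.items()}

line = ("testbed-node-0 : ok=26  changed=15  unreachable=0 "
        "failed=0 skipped=32  rescued=0 ignored=0")
recap = parse_recap(line)
print(recap["host"], recap["failed"])  # testbed-node-0 0
```

A CI wrapper could fail fast when `failed` or `unreachable` is nonzero for any host, rather than scanning the full console output.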
2026-04-09 03:22:01.933292 | orchestrator | 2026-04-09 03:22:01.933451 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:22:01.933470 | orchestrator | 2026-04-09 03:22:01.933482 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-09 03:22:01.933493 | orchestrator | Thursday 09 April 2026 03:20:06 +0000 (0:00:00.319) 0:00:00.319 ******** 2026-04-09 03:22:01.933505 | orchestrator | changed: [testbed-manager] 2026-04-09 03:22:01.933517 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.933528 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:22:01.933539 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:22:01.933550 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:22:01.933561 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:22:01.933572 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:22:01.933583 | orchestrator | 2026-04-09 03:22:01.933594 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:22:01.933605 | orchestrator | Thursday 09 April 2026 03:20:07 +0000 (0:00:00.961) 0:00:01.281 ******** 2026-04-09 03:22:01.933616 | orchestrator | changed: [testbed-manager] 2026-04-09 03:22:01.933627 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.933638 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:22:01.933649 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:22:01.933660 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:22:01.933670 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:22:01.933682 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:22:01.933693 | orchestrator | 2026-04-09 03:22:01.933704 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:22:01.933715 | orchestrator | Thursday 09 April 2026 03:20:08 +0000 (0:00:00.975) 0:00:02.256 
******** 2026-04-09 03:22:01.933726 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-09 03:22:01.933737 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-09 03:22:01.933748 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-09 03:22:01.933759 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-09 03:22:01.933770 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-09 03:22:01.933780 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-09 03:22:01.933791 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-09 03:22:01.933802 | orchestrator | 2026-04-09 03:22:01.933813 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-09 03:22:01.933824 | orchestrator | 2026-04-09 03:22:01.933835 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-09 03:22:01.933846 | orchestrator | Thursday 09 April 2026 03:20:09 +0000 (0:00:00.816) 0:00:03.073 ******** 2026-04-09 03:22:01.933856 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:22:01.933867 | orchestrator | 2026-04-09 03:22:01.933878 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-09 03:22:01.933889 | orchestrator | Thursday 09 April 2026 03:20:10 +0000 (0:00:00.884) 0:00:03.957 ******** 2026-04-09 03:22:01.933901 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-09 03:22:01.933933 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-09 03:22:01.933944 | orchestrator | 2026-04-09 03:22:01.933955 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-09 03:22:01.933966 | orchestrator | Thursday 09 April 2026 03:20:14 +0000 (0:00:04.192) 0:00:08.150 
******** 2026-04-09 03:22:01.933977 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 03:22:01.933988 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 03:22:01.933999 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.934010 | orchestrator | 2026-04-09 03:22:01.934091 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-09 03:22:01.934103 | orchestrator | Thursday 09 April 2026 03:20:18 +0000 (0:00:04.098) 0:00:12.249 ******** 2026-04-09 03:22:01.934114 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.934126 | orchestrator | 2026-04-09 03:22:01.934136 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-09 03:22:01.934147 | orchestrator | Thursday 09 April 2026 03:20:19 +0000 (0:00:00.678) 0:00:12.928 ******** 2026-04-09 03:22:01.934158 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.934178 | orchestrator | 2026-04-09 03:22:01.934189 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-09 03:22:01.934200 | orchestrator | Thursday 09 April 2026 03:20:20 +0000 (0:00:01.326) 0:00:14.254 ******** 2026-04-09 03:22:01.934211 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.934222 | orchestrator | 2026-04-09 03:22:01.934233 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 03:22:01.934243 | orchestrator | Thursday 09 April 2026 03:20:23 +0000 (0:00:02.803) 0:00:17.058 ******** 2026-04-09 03:22:01.934254 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:22:01.934265 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.934276 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.934287 | orchestrator | 2026-04-09 03:22:01.934297 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-09 
03:22:01.934308 | orchestrator | Thursday 09 April 2026 03:20:24 +0000 (0:00:00.361) 0:00:17.419 ******** 2026-04-09 03:22:01.934319 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:22:01.934330 | orchestrator | 2026-04-09 03:22:01.934341 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-09 03:22:01.934352 | orchestrator | Thursday 09 April 2026 03:20:56 +0000 (0:00:31.918) 0:00:49.338 ******** 2026-04-09 03:22:01.934362 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.934373 | orchestrator | 2026-04-09 03:22:01.934435 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-09 03:22:01.934446 | orchestrator | Thursday 09 April 2026 03:21:10 +0000 (0:00:14.845) 0:01:04.183 ******** 2026-04-09 03:22:01.934457 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:22:01.934468 | orchestrator | 2026-04-09 03:22:01.934478 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-09 03:22:01.934489 | orchestrator | Thursday 09 April 2026 03:21:23 +0000 (0:00:12.177) 0:01:16.361 ******** 2026-04-09 03:22:01.934519 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:22:01.934530 | orchestrator | 2026-04-09 03:22:01.934549 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-09 03:22:01.934560 | orchestrator | Thursday 09 April 2026 03:21:23 +0000 (0:00:00.736) 0:01:17.097 ******** 2026-04-09 03:22:01.934571 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:22:01.934581 | orchestrator | 2026-04-09 03:22:01.934592 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 03:22:01.934603 | orchestrator | Thursday 09 April 2026 03:21:24 +0000 (0:00:00.533) 0:01:17.630 ******** 2026-04-09 03:22:01.934614 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-09 03:22:01.934626 | orchestrator | 2026-04-09 03:22:01.934636 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-09 03:22:01.934657 | orchestrator | Thursday 09 April 2026 03:21:25 +0000 (0:00:00.774) 0:01:18.405 ******** 2026-04-09 03:22:01.934668 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:22:01.934679 | orchestrator | 2026-04-09 03:22:01.934689 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-09 03:22:01.934700 | orchestrator | Thursday 09 April 2026 03:21:42 +0000 (0:00:17.709) 0:01:36.115 ******** 2026-04-09 03:22:01.934711 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:22:01.934722 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.934733 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.934743 | orchestrator | 2026-04-09 03:22:01.934754 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-09 03:22:01.934765 | orchestrator | 2026-04-09 03:22:01.934776 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-09 03:22:01.934786 | orchestrator | Thursday 09 April 2026 03:21:43 +0000 (0:00:00.353) 0:01:36.469 ******** 2026-04-09 03:22:01.934797 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:22:01.934808 | orchestrator | 2026-04-09 03:22:01.934819 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-09 03:22:01.934830 | orchestrator | Thursday 09 April 2026 03:21:43 +0000 (0:00:00.858) 0:01:37.328 ******** 2026-04-09 03:22:01.934840 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.934851 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.934862 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.934873 | orchestrator | 
2026-04-09 03:22:01.934884 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-09 03:22:01.934895 | orchestrator | Thursday 09 April 2026 03:21:45 +0000 (0:00:02.000) 0:01:39.328 ******** 2026-04-09 03:22:01.934906 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.934916 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.934927 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.934938 | orchestrator | 2026-04-09 03:22:01.934949 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-09 03:22:01.934960 | orchestrator | Thursday 09 April 2026 03:21:48 +0000 (0:00:02.066) 0:01:41.395 ******** 2026-04-09 03:22:01.934970 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:22:01.934981 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.934992 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.935003 | orchestrator | 2026-04-09 03:22:01.935014 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-09 03:22:01.935024 | orchestrator | Thursday 09 April 2026 03:21:48 +0000 (0:00:00.608) 0:01:42.004 ******** 2026-04-09 03:22:01.935035 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 03:22:01.935046 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.935057 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 03:22:01.935068 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.935078 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 03:22:01.935089 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-09 03:22:01.935100 | orchestrator | 2026-04-09 03:22:01.935111 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-09 03:22:01.935122 | orchestrator | Thursday 09 April 2026 03:21:56 +0000 
(0:00:07.570) 0:01:49.575 ******** 2026-04-09 03:22:01.935133 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:22:01.935144 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.935155 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.935165 | orchestrator | 2026-04-09 03:22:01.935176 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-09 03:22:01.935187 | orchestrator | Thursday 09 April 2026 03:21:56 +0000 (0:00:00.402) 0:01:49.978 ******** 2026-04-09 03:22:01.935198 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 03:22:01.935209 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:22:01.935228 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 03:22:01.935238 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.935249 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 03:22:01.935260 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.935271 | orchestrator | 2026-04-09 03:22:01.935283 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-09 03:22:01.935303 | orchestrator | Thursday 09 April 2026 03:21:57 +0000 (0:00:01.182) 0:01:51.160 ******** 2026-04-09 03:22:01.935323 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.935343 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.935363 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:22:01.935406 | orchestrator | 2026-04-09 03:22:01.935426 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-09 03:22:01.935444 | orchestrator | Thursday 09 April 2026 03:21:58 +0000 (0:00:00.490) 0:01:51.650 ******** 2026-04-09 03:22:01.935456 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.935466 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.935477 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 03:22:01.935488 | orchestrator | 2026-04-09 03:22:01.935499 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-09 03:22:01.935510 | orchestrator | Thursday 09 April 2026 03:21:59 +0000 (0:00:00.985) 0:01:52.636 ******** 2026-04-09 03:22:01.935521 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:22:01.935532 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:22:01.935551 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:23:20.920709 | orchestrator | 2026-04-09 03:23:20.920797 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-09 03:23:20.920804 | orchestrator | Thursday 09 April 2026 03:22:01 +0000 (0:00:02.610) 0:01:55.247 ******** 2026-04-09 03:23:20.920808 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:20.920813 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:20.920817 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:23:20.920822 | orchestrator | 2026-04-09 03:23:20.920826 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-09 03:23:20.920830 | orchestrator | Thursday 09 April 2026 03:22:24 +0000 (0:00:22.310) 0:02:17.557 ******** 2026-04-09 03:23:20.920833 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:20.920837 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:20.920841 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:23:20.920844 | orchestrator | 2026-04-09 03:23:20.920848 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-09 03:23:20.920852 | orchestrator | Thursday 09 April 2026 03:22:35 +0000 (0:00:11.372) 0:02:28.930 ******** 2026-04-09 03:23:20.920857 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:23:20.920863 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:20.920869 | orchestrator | skipping: [testbed-node-2] 
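Each task header above ends with two timing stamps from the `profile_tasks` callback, e.g. `(0:00:22.310) 0:02:17.557`: the parenthesized value is the previous task's duration, the second is the cumulative play runtime. A minimal sketch (hypothetical helper, not part of the job) that converts both to seconds for sorting slow tasks:

```python
import re

# Matches "(H:MM:SS.mmm) H:MM:SS.mmm" as emitted by the profile_tasks
# callback in the task headers logged above.
STAMP_RE = re.compile(r"\((\d+):(\d+):(\d+\.\d+)\)\s+(\d+):(\d+):(\d+\.\d+)")

def parse_stamp(text: str):
    """Return (task_duration_s, cumulative_s) parsed from a task header."""
    m = STAMP_RE.search(text)
    if not m:
        raise ValueError(f"no timing stamp in: {text!r}")
    h1, m1, s1, h2, m2, s2 = m.groups()
    task = int(h1) * 3600 + int(m1) * 60 + float(s1)
    total = int(h2) * 3600 + int(m2) * 60 + float(s2)
    return task, total

task, total = parse_stamp(
    "Thursday 09 April 2026 03:22:24 +0000 (0:00:22.310) 0:02:17.557")
print(round(task, 3), round(total, 3))  # 22.31 137.557
```

Applied across a whole play, this reproduces the sorted durations the TASKS RECAP prints (e.g. the 22.31s "Running Nova cell bootstrap container" step above).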
2026-04-09 03:23:20.920875 | orchestrator | 2026-04-09 03:23:20.920880 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-09 03:23:20.920886 | orchestrator | Thursday 09 April 2026 03:22:36 +0000 (0:00:01.152) 0:02:30.083 ******** 2026-04-09 03:23:20.920891 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:20.920897 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:20.920903 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:23:20.920908 | orchestrator | 2026-04-09 03:23:20.920914 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-09 03:23:20.920920 | orchestrator | Thursday 09 April 2026 03:22:49 +0000 (0:00:13.048) 0:02:43.131 ******** 2026-04-09 03:23:20.920926 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:20.920933 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:23:20.920939 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:20.920944 | orchestrator | 2026-04-09 03:23:20.920951 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-09 03:23:20.920956 | orchestrator | Thursday 09 April 2026 03:22:51 +0000 (0:00:01.218) 0:02:44.350 ******** 2026-04-09 03:23:20.920979 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:23:20.920983 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:20.920987 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:20.920991 | orchestrator | 2026-04-09 03:23:20.920994 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-09 03:23:20.920998 | orchestrator | 2026-04-09 03:23:20.921002 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 03:23:20.921005 | orchestrator | Thursday 09 April 2026 03:22:51 +0000 (0:00:00.365) 0:02:44.715 ******** 2026-04-09 03:23:20.921009 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:23:20.921014 | orchestrator | 2026-04-09 03:23:20.921018 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-04-09 03:23:20.921022 | orchestrator | Thursday 09 April 2026 03:22:52 +0000 (0:00:00.843) 0:02:45.559 ******** 2026-04-09 03:23:20.921026 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-09 03:23:20.921029 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-09 03:23:20.921033 | orchestrator | 2026-04-09 03:23:20.921037 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-09 03:23:20.921041 | orchestrator | Thursday 09 April 2026 03:22:55 +0000 (0:00:03.024) 0:02:48.584 ******** 2026-04-09 03:23:20.921044 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-09 03:23:20.921050 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-09 03:23:20.921054 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-09 03:23:20.921058 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-09 03:23:20.921062 | orchestrator | 2026-04-09 03:23:20.921069 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-09 03:23:20.921075 | orchestrator | Thursday 09 April 2026 03:23:01 +0000 (0:00:06.363) 0:02:54.947 ******** 2026-04-09 03:23:20.921081 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 03:23:20.921087 | orchestrator | 2026-04-09 03:23:20.921093 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-04-09 03:23:20.921098 | orchestrator | Thursday 09 April 2026 03:23:04 +0000 (0:00:03.289) 0:02:58.237 ******** 2026-04-09 03:23:20.921104 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 03:23:20.921110 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-09 03:23:20.921115 | orchestrator | 2026-04-09 03:23:20.921121 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-09 03:23:20.921127 | orchestrator | Thursday 09 April 2026 03:23:08 +0000 (0:00:03.871) 0:03:02.108 ******** 2026-04-09 03:23:20.921133 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 03:23:20.921139 | orchestrator | 2026-04-09 03:23:20.921144 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-09 03:23:20.921150 | orchestrator | Thursday 09 April 2026 03:23:12 +0000 (0:00:03.458) 0:03:05.567 ******** 2026-04-09 03:23:20.921156 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-09 03:23:20.921162 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-09 03:23:20.921167 | orchestrator | 2026-04-09 03:23:20.921173 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-09 03:23:20.921192 | orchestrator | Thursday 09 April 2026 03:23:19 +0000 (0:00:07.300) 0:03:12.867 ******** 2026-04-09 03:23:20.921234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:20.921253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:20.921258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:20.921271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-09 03:23:25.705897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:25.705978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:25.705985 | orchestrator | 2026-04-09 03:23:25.705992 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-09 03:23:25.705999 | orchestrator | Thursday 09 April 2026 03:23:20 +0000 (0:00:01.371) 0:03:14.239 ******** 2026-04-09 03:23:25.706004 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:23:25.706009 | orchestrator | 2026-04-09 03:23:25.706014 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-09 03:23:25.706052 | orchestrator | Thursday 09 April 2026 03:23:21 +0000 (0:00:00.156) 0:03:14.396 ******** 2026-04-09 03:23:25.706060 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:23:25.706068 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:25.706076 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:25.706084 | orchestrator | 2026-04-09 03:23:25.706091 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-09 03:23:25.706099 | orchestrator | Thursday 09 April 2026 03:23:21 +0000 (0:00:00.320) 0:03:14.717 ******** 2026-04-09 03:23:25.706107 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:23:25.706115 | orchestrator | 2026-04-09 03:23:25.706122 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-09 03:23:25.706130 | orchestrator | Thursday 09 April 2026 03:23:22 +0000 (0:00:00.736) 0:03:15.453 ******** 2026-04-09 03:23:25.706137 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:23:25.706145 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:25.706153 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:25.706161 | orchestrator | 2026-04-09 03:23:25.706168 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 03:23:25.706177 | orchestrator | Thursday 09 April 2026 03:23:22 +0000 (0:00:00.564) 0:03:16.018 ******** 2026-04-09 03:23:25.706187 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:23:25.706197 | orchestrator | 2026-04-09 03:23:25.706205 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-09 03:23:25.706214 | orchestrator | Thursday 09 April 2026 03:23:23 +0000 (0:00:00.637) 0:03:16.655 ******** 2026-04-09 03:23:25.706237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:25.706278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:25.706284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:25.706290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:25.706295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:25.706333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:25.706347 | orchestrator | 2026-04-09 03:23:25.706356 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 03:23:27.532631 | orchestrator | Thursday 09 April 2026 03:23:25 +0000 (0:00:02.370) 0:03:19.026 ******** 2026-04-09 03:23:27.532746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 03:23:27.532768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:23:27.532782 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:23:27.532796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 03:23:27.532846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:23:27.532858 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:27.532890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 03:23:27.532903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:23:27.532915 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:27.532926 | orchestrator | 2026-04-09 03:23:27.532938 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 03:23:27.532949 | orchestrator | Thursday 09 April 2026 03:23:26 +0000 (0:00:00.962) 0:03:19.989 
******** 2026-04-09 03:23:27.532961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 03:23:27.532982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:23:27.532993 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 03:23:27.533019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 03:23:30.008454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:23:30.008547 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
03:23:30.008564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 03:23:30.008596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:23:30.008606 | orchestrator | skipping: [testbed-node-2] 2026-04-09 
03:23:30.008615 | orchestrator | 2026-04-09 03:23:30.008625 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-09 03:23:30.008635 | orchestrator | Thursday 09 April 2026 03:23:27 +0000 (0:00:00.863) 0:03:20.853 ******** 2026-04-09 03:23:30.008657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:30.008684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:30.008721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:30.008749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:30.008770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:30.008792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-09 03:23:36.956872 | orchestrator | 2026-04-09 03:23:36.956958 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-09 03:23:36.956970 | orchestrator | Thursday 09 April 2026 03:23:29 +0000 (0:00:02.473) 0:03:23.326 ******** 2026-04-09 03:23:36.956981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:36.957011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:36.957062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:36.957089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:36.957098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:36.957112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:36.957116 | orchestrator | 2026-04-09 03:23:36.957120 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-09 03:23:36.957126 | orchestrator | Thursday 09 April 2026 03:23:36 +0000 (0:00:06.327) 0:03:29.654 ******** 2026-04-09 03:23:36.957137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 03:23:36.957144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:23:36.957150 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:23:36.957165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 03:23:41.476752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:23:41.476848 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:41.476865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 03:23:41.476892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:23:41.476902 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:41.476912 | orchestrator | 2026-04-09 03:23:41.476922 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-09 03:23:41.476932 | orchestrator | Thursday 09 April 2026 03:23:36 +0000 (0:00:00.625) 0:03:30.279 ******** 2026-04-09 03:23:41.476941 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:23:41.476950 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:23:41.476958 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:23:41.476967 | orchestrator | 2026-04-09 03:23:41.476975 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-09 03:23:41.476984 | orchestrator | Thursday 09 April 2026 03:23:38 +0000 (0:00:01.587) 0:03:31.867 ******** 2026-04-09 03:23:41.476992 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:23:41.477001 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:23:41.477010 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:23:41.477018 | orchestrator | 2026-04-09 03:23:41.477026 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-09 03:23:41.477035 | orchestrator | Thursday 09 April 2026 03:23:38 +0000 (0:00:00.360) 0:03:32.227 ******** 2026-04-09 03:23:41.477060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:41.477093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:41.477117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 03:23:41.477134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:41.477159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:23:41.477185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:20.901967 | orchestrator | 2026-04-09 03:24:20.902093 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 03:24:20.902106 | orchestrator | Thursday 09 April 2026 03:23:40 +0000 (0:00:02.074) 0:03:34.302 ******** 2026-04-09 03:24:20.902112 | orchestrator | 2026-04-09 03:24:20.902118 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 03:24:20.902123 | orchestrator | Thursday 09 April 2026 03:23:41 +0000 (0:00:00.158) 0:03:34.461 ******** 2026-04-09 
03:24:20.902128 | orchestrator | 2026-04-09 03:24:20.902134 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 03:24:20.902139 | orchestrator | Thursday 09 April 2026 03:23:41 +0000 (0:00:00.156) 0:03:34.617 ******** 2026-04-09 03:24:20.902144 | orchestrator | 2026-04-09 03:24:20.902149 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-09 03:24:20.902154 | orchestrator | Thursday 09 April 2026 03:23:41 +0000 (0:00:00.170) 0:03:34.788 ******** 2026-04-09 03:24:20.902160 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:24:20.902166 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:24:20.902171 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:24:20.902176 | orchestrator | 2026-04-09 03:24:20.902181 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-09 03:24:20.902186 | orchestrator | Thursday 09 April 2026 03:24:01 +0000 (0:00:19.835) 0:03:54.623 ******** 2026-04-09 03:24:20.902192 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:24:20.902197 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:24:20.902202 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:24:20.902207 | orchestrator | 2026-04-09 03:24:20.902212 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-09 03:24:20.902217 | orchestrator | 2026-04-09 03:24:20.902222 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 03:24:20.902227 | orchestrator | Thursday 09 April 2026 03:24:07 +0000 (0:00:06.011) 0:04:00.635 ******** 2026-04-09 03:24:20.902233 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:24:20.902239 | orchestrator | 2026-04-09 03:24:20.902244 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 03:24:20.902294 | orchestrator | Thursday 09 April 2026 03:24:08 +0000 (0:00:01.354) 0:04:01.989 ******** 2026-04-09 03:24:20.902301 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:24:20.902306 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:24:20.902327 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:24:20.902333 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:24:20.902338 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:24:20.902343 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:24:20.902348 | orchestrator | 2026-04-09 03:24:20.902354 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-09 03:24:20.902359 | orchestrator | Thursday 09 April 2026 03:24:09 +0000 (0:00:00.844) 0:04:02.833 ******** 2026-04-09 03:24:20.902364 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:24:20.902369 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:24:20.902374 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:24:20.902379 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:24:20.902384 | orchestrator | 2026-04-09 03:24:20.902390 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-09 03:24:20.902395 | orchestrator | Thursday 09 April 2026 03:24:10 +0000 (0:00:00.950) 0:04:03.783 ******** 2026-04-09 03:24:20.902400 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-09 03:24:20.902406 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-09 03:24:20.902411 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-09 03:24:20.902416 | orchestrator | 2026-04-09 03:24:20.902421 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-09 
03:24:20.902426 | orchestrator | Thursday 09 April 2026 03:24:11 +0000 (0:00:00.999) 0:04:04.782 ******** 2026-04-09 03:24:20.902431 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-09 03:24:20.902436 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-09 03:24:20.902441 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-09 03:24:20.902446 | orchestrator | 2026-04-09 03:24:20.902451 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-09 03:24:20.902456 | orchestrator | Thursday 09 April 2026 03:24:12 +0000 (0:00:01.217) 0:04:06.000 ******** 2026-04-09 03:24:20.902461 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-09 03:24:20.902466 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:24:20.902484 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-09 03:24:20.902490 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:24:20.902495 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-09 03:24:20.902500 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:24:20.902505 | orchestrator | 2026-04-09 03:24:20.902510 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-09 03:24:20.902515 | orchestrator | Thursday 09 April 2026 03:24:13 +0000 (0:00:00.608) 0:04:06.609 ******** 2026-04-09 03:24:20.902520 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-09 03:24:20.902525 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 03:24:20.902530 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 03:24:20.902535 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:24:20.902541 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  
2026-04-09 03:24:20.902547 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 03:24:20.902553 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:24:20.902559 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 03:24:20.902577 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 03:24:20.902590 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:24:20.902596 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-09 03:24:20.902602 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-09 03:24:20.902609 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-09 03:24:20.902620 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-09 03:24:20.902626 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-09 03:24:20.902631 | orchestrator | 2026-04-09 03:24:20.902638 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-09 03:24:20.902643 | orchestrator | Thursday 09 April 2026 03:24:15 +0000 (0:00:02.495) 0:04:09.104 ******** 2026-04-09 03:24:20.902649 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:24:20.902662 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:24:20.902669 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:24:20.902674 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:24:20.902680 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:24:20.902686 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:24:20.902691 | orchestrator | 2026-04-09 03:24:20.902698 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-09 03:24:20.902704 | orchestrator | 
Thursday 09 April 2026 03:24:16 +0000 (0:00:01.212) 0:04:10.316 ******** 2026-04-09 03:24:20.902709 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:24:20.902715 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:24:20.902721 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:24:20.902727 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:24:20.902733 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:24:20.902739 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:24:20.902745 | orchestrator | 2026-04-09 03:24:20.902754 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-09 03:24:20.902762 | orchestrator | Thursday 09 April 2026 03:24:18 +0000 (0:00:01.958) 0:04:12.275 ******** 2026-04-09 03:24:20.902780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:24:20.902797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:24:20.902814 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:24:22.727906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728015 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728082 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:22.728391 | orchestrator | 2026-04-09 03:24:22.728413 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 03:24:22.728445 | 
orchestrator | Thursday 09 April 2026 03:24:21 +0000 (0:00:02.379) 0:04:14.654 ******** 2026-04-09 03:24:22.728465 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:24:22.728481 | orchestrator | 2026-04-09 03:24:22.728499 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-09 03:24:22.728548 | orchestrator | Thursday 09 April 2026 03:24:22 +0000 (0:00:01.391) 0:04:16.047 ******** 2026-04-09 03:24:26.403056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-04-09 03:24:26.403424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403490 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:26.403564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:28.033945 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:28.034975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:24:28.035050 | orchestrator | 2026-04-09 03:24:28.035065 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 03:24:28.035078 | orchestrator | Thursday 09 April 2026 03:24:26 +0000 (0:00:03.891) 0:04:19.938 ******** 2026-04-09 03:24:28.035091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:24:28.035130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:24:28.035143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 03:24:28.035155 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:24:28.035224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:24:28.035249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:24:28.035293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 03:24:28.035325 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:24:28.035346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:24:28.035366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:24:28.035399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 03:24:30.143564 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:24:30.143681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 03:24:30.143701 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 03:24:30.143744 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:24:30.143755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 03:24:30.143767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 03:24:30.143777 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
03:24:30.143787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 03:24:30.143797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 03:24:30.143807 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:24:30.143817 | orchestrator | 2026-04-09 03:24:30.143828 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 03:24:30.143839 | orchestrator | Thursday 09 April 2026 03:24:28 +0000 (0:00:01.701) 0:04:21.639 ******** 2026-04-09 03:24:30.143886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:24:30.143915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:24:30.143935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 03:24:30.143951 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:24:30.143967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:24:30.143984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:24:30.144022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 03:24:38.264342 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:24:38.264483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:24:38.264505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:24:38.264519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 03:24:38.264531 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:24:38.264543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 03:24:38.264555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 03:24:38.264566 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:24:38.264610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 03:24:38.264632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 03:24:38.264644 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:24:38.264655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 03:24:38.264667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 03:24:38.264678 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:24:38.264689 | orchestrator | 2026-04-09 03:24:38.264701 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 03:24:38.264713 | orchestrator | Thursday 09 April 2026 03:24:30 +0000 (0:00:02.556) 0:04:24.196 ******** 2026-04-09 03:24:38.264724 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:24:38.264735 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:24:38.264746 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:24:38.264757 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:24:38.264769 | orchestrator | 2026-04-09 03:24:38.264779 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-09 
03:24:38.264790 | orchestrator | Thursday 09 April 2026 03:24:31 +0000 (0:00:01.008) 0:04:25.205 ********
2026-04-09 03:24:38.264801 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 03:24:38.264812 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-09 03:24:38.264827 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-09 03:24:38.264847 | orchestrator |
2026-04-09 03:24:38.264865 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-04-09 03:24:38.264882 | orchestrator | Thursday 09 April 2026 03:24:33 +0000 (0:00:01.318) 0:04:26.524 ********
2026-04-09 03:24:38.264899 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 03:24:38.264916 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-09 03:24:38.264934 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-09 03:24:38.264951 | orchestrator |
2026-04-09 03:24:38.264969 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-04-09 03:24:38.264984 | orchestrator | Thursday 09 April 2026 03:24:34 +0000 (0:00:00.575) 0:04:27.581 ********
2026-04-09 03:24:38.265013 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:24:38.265029 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:24:38.265046 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:24:38.265063 | orchestrator |
2026-04-09 03:24:38.265080 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-04-09 03:24:38.265099 | orchestrator | Thursday 09 April 2026 03:24:34 +0000 (0:00:00.573) 0:04:28.157 ********
2026-04-09 03:24:38.265118 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:24:38.265138 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:24:38.265156 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:24:38.265175 | orchestrator |
2026-04-09 03:24:38.265192 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-04-09 03:24:38.265210 | orchestrator | Thursday 09 April 2026 03:24:35 +0000 (0:00:00.573) 0:04:28.730 ********
2026-04-09 03:24:38.265228 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-09 03:24:38.265278 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-09 03:24:38.265296 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-09 03:24:38.265314 | orchestrator |
2026-04-09 03:24:38.265331 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-04-09 03:24:38.265351 | orchestrator | Thursday 09 April 2026 03:24:36 +0000 (0:00:01.480) 0:04:30.210 ********
2026-04-09 03:24:38.265397 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-09 03:24:57.971568 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-09 03:24:57.971684 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-09 03:24:57.971697 | orchestrator |
2026-04-09 03:24:57.971709 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-04-09 03:24:57.971721 | orchestrator | Thursday 09 April 2026 03:24:38 +0000 (0:00:01.374) 0:04:31.585 ********
2026-04-09 03:24:57.971731 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-09 03:24:57.971741 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-09 03:24:57.971751 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-09 03:24:57.971761 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-04-09 03:24:57.971770 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-04-09 03:24:57.971779 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-04-09 03:24:57.971789 | orchestrator |
2026-04-09 03:24:57.971799 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-04-09 03:24:57.971809 | orchestrator | Thursday 09 April 2026 03:24:42 +0000 (0:00:04.265) 0:04:35.850 ********
2026-04-09 03:24:57.971818 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:24:57.971829 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:24:57.971838 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:24:57.971848 | orchestrator |
2026-04-09 03:24:57.971858 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-04-09 03:24:57.971867 | orchestrator | Thursday 09 April 2026 03:24:42 +0000 (0:00:00.315) 0:04:36.165 ********
2026-04-09 03:24:57.971877 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:24:57.971886 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:24:57.971896 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:24:57.971905 | orchestrator |
2026-04-09 03:24:57.971916 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-04-09 03:24:57.971926 | orchestrator | Thursday 09 April 2026 03:24:43 +0000 (0:00:00.589) 0:04:36.755 ********
2026-04-09 03:24:57.971935 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:24:57.971945 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:24:57.971954 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:24:57.971964 | orchestrator |
2026-04-09 03:24:57.971974 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-04-09 03:24:57.971983 | orchestrator | Thursday 09 April 2026 03:24:44 +0000 (0:00:01.305) 0:04:38.060 ********
2026-04-09 03:24:57.972018 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-09 03:24:57.972031 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-09 03:24:57.972040 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-09 03:24:57.972050 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-09 03:24:57.972060 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-09 03:24:57.972070 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-09 03:24:57.972079 | orchestrator |
2026-04-09 03:24:57.972089 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-04-09 03:24:57.972101 | orchestrator | Thursday 09 April 2026 03:24:48 +0000 (0:00:03.426) 0:04:41.486 ********
2026-04-09 03:24:57.972113 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-09 03:24:57.972124 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-09 03:24:57.972136 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-09 03:24:57.972147 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-09 03:24:57.972158 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:24:57.972170 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-09 03:24:57.972181 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:24:57.972192 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-09 03:24:57.972203 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:24:57.972215 | orchestrator |
2026-04-09 03:24:57.972282 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-04-09 03:24:57.972296 | orchestrator | Thursday 09 April 2026 03:24:51 +0000 (0:00:00.138) 0:04:45.048 ********
2026-04-09 03:24:57.972308 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:24:57.972319 | orchestrator |
2026-04-09 03:24:57.972330 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-04-09 03:24:57.972342 | orchestrator | Thursday 09 April 2026 03:24:51 +0000 (0:00:00.138) 0:04:45.186 ********
2026-04-09 03:24:57.972352 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:24:57.972362 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:24:57.972371 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:24:57.972381 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:24:57.972391 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:24:57.972400 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:24:57.972409 | orchestrator |
2026-04-09 03:24:57.972419 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-04-09 03:24:57.972429 | orchestrator | Thursday 09 April 2026 03:24:52 +0000 (0:00:00.903) 0:04:46.089 ********
2026-04-09 03:24:57.972439 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 03:24:57.972448 | orchestrator |
2026-04-09 03:24:57.972458 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-04-09 03:24:57.972482 | orchestrator | Thursday 09 April 2026 03:24:53 +0000 (0:00:00.772) 0:04:46.862 ********
2026-04-09 03:24:57.972492 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:24:57.972519 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:24:57.972529 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:24:57.972539 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:24:57.972548 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:24:57.972558 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:24:57.972567 | orchestrator |
2026-04-09 03:24:57.972577 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-04-09 03:24:57.972586 | orchestrator | Thursday 09 April 2026 03:24:54 +0000 (0:00:00.889) 0:04:47.751 ******** 2026-04-09 03:24:57.972609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:24:57.972624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:24:57.972634 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:24:57.972645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:24:57.972669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121362 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.121408 | orchestrator | 2026-04-09 03:25:05.121415 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-09 03:25:05.121422 | orchestrator | Thursday 09 April 2026 03:24:58 +0000 (0:00:03.692) 0:04:51.444 ******** 2026-04-09 03:25:05.121428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:25:05.121439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:25:05.121456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:25:05.503742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:25:05.503817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:25:05.503824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:25:05.503829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.503860 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.503874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:25:05.503879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.503883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:25:05.503888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:25:05.503892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.503903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.503907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:25:05.503911 | orchestrator | 2026-04-09 03:25:05.503916 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-09 03:25:05.503924 | orchestrator | Thursday 09 April 2026 03:25:05 +0000 (0:00:07.381) 0:04:58.825 ******** 2026-04-09 03:25:28.324944 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:25:28.325075 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:25:28.325096 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:25:28.325110 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:28.325124 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:25:28.325139 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:28.325153 | orchestrator | 2026-04-09 03:25:28.325168 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-09 03:25:28.325184 | orchestrator | Thursday 09 April 2026 03:25:07 +0000 (0:00:01.641) 0:05:00.467 ******** 2026-04-09 03:25:28.325198 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-09 03:25:28.325268 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-09 03:25:28.325283 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-09 03:25:28.325297 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-09 03:25:28.325312 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-09 03:25:28.325327 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-09 03:25:28.325341 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-09 03:25:28.325355 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:25:28.325370 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-09 03:25:28.325384 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:28.325398 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-09 03:25:28.325413 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:28.325427 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-09 03:25:28.325442 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-09 03:25:28.325486 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-09 03:25:28.325503 | orchestrator | 2026-04-09 03:25:28.325521 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-09 03:25:28.325538 | orchestrator | Thursday 09 April 2026 03:25:11 +0000 (0:00:04.125) 0:05:04.592 ******** 2026-04-09 03:25:28.325554 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:25:28.325568 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:25:28.325582 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:25:28.325596 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:28.325610 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:25:28.325624 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:28.325638 | orchestrator | 2026-04-09 03:25:28.325652 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-09 03:25:28.325666 | orchestrator | Thursday 09 April 2026 03:25:12 +0000 (0:00:00.943) 0:05:05.535 ******** 2026-04-09 03:25:28.325681 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-09 03:25:28.325696 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-09 03:25:28.325710 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-09 03:25:28.325723 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-09 03:25:28.325738 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-09 03:25:28.325771 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-09 03:25:28.325786 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-09 03:25:28.325800 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-09 03:25:28.325814 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-09 03:25:28.325828 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-09 03:25:28.325842 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:28.325857 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-09 03:25:28.325871 | orchestrator | 
skipping: [testbed-node-1] 2026-04-09 03:25:28.325885 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-09 03:25:28.325900 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-09 03:25:28.325913 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:28.325927 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-09 03:25:28.325965 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-09 03:25:28.325979 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-09 03:25:28.325993 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-09 03:25:28.326007 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-09 03:25:28.326109 | orchestrator | 2026-04-09 03:25:28.326126 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-09 03:25:28.326142 | orchestrator | Thursday 09 April 2026 03:25:18 +0000 (0:00:05.810) 0:05:11.346 ******** 2026-04-09 03:25:28.326172 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 03:25:28.326187 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 03:25:28.326277 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 03:25:28.326296 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-09 03:25:28.326310 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-09 03:25:28.326324 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 03:25:28.326339 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 03:25:28.326352 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-09 03:25:28.326365 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 03:25:28.326376 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 03:25:28.326388 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 03:25:28.326401 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-09 03:25:28.326413 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:28.326427 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 03:25:28.326440 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-09 03:25:28.326453 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:25:28.326465 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-09 03:25:28.326478 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:28.326490 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 03:25:28.326502 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 03:25:28.326515 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 03:25:28.326527 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 03:25:28.326539 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 03:25:28.326552 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 03:25:28.326564 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 03:25:28.326576 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 03:25:28.326589 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 03:25:28.326601 | orchestrator | 2026-04-09 03:25:28.326624 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-09 03:25:28.326637 | orchestrator | Thursday 09 April 2026 03:25:24 +0000 (0:00:06.893) 0:05:18.240 ******** 2026-04-09 03:25:28.326649 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:25:28.326661 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:25:28.326674 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:25:28.326686 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:28.326699 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:25:28.326713 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:28.326726 | orchestrator | 2026-04-09 03:25:28.326740 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-09 03:25:28.326753 | orchestrator | Thursday 09 April 2026 03:25:25 +0000 (0:00:00.681) 0:05:18.922 ******** 2026-04-09 03:25:28.326780 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:25:28.326794 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:25:28.326808 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:25:28.326822 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:28.326836 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 03:25:28.326850 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:28.326864 | orchestrator | 2026-04-09 03:25:28.326878 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-09 03:25:28.326892 | orchestrator | Thursday 09 April 2026 03:25:26 +0000 (0:00:00.589) 0:05:19.511 ******** 2026-04-09 03:25:28.326907 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:25:28.326921 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:28.326935 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:28.326949 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:25:28.326963 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:25:28.326977 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:25:28.326991 | orchestrator | 2026-04-09 03:25:28.327023 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-09 03:25:29.576104 | orchestrator | Thursday 09 April 2026 03:25:28 +0000 (0:00:02.121) 0:05:21.633 ******** 2026-04-09 03:25:29.576200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-04-09 03:25:29.576264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:25:29.576284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 03:25:29.576302 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:25:29.576333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:25:29.576363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:25:29.576390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 03:25:29.576400 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 03:25:29.576409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 03:25:29.576418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 03:25:29.576427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 03:25:29.576448 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:25:29.576483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 03:25:29.576501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 03:25:33.420098 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:33.420298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 03:25:33.420321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 03:25:33.420335 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:25:33.420347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 03:25:33.420359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 03:25:33.420408 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:33.420421 | orchestrator | 2026-04-09 03:25:33.420433 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-09 03:25:33.420446 | orchestrator | Thursday 09 April 2026 03:25:29 +0000 (0:00:01.500) 0:05:23.133 ******** 2026-04-09 03:25:33.420457 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-09 03:25:33.420468 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-09 03:25:33.420495 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:25:33.420507 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-09 03:25:33.420518 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-09 03:25:33.420528 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:25:33.420539 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-09 03:25:33.420550 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-09 03:25:33.420561 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:25:33.420572 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-09 03:25:33.420583 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-09 03:25:33.420594 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:25:33.420606 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-04-09 03:25:33.420619 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-09 03:25:33.420632 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:25:33.420645 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-09 03:25:33.420658 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-09 03:25:33.420670 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:25:33.420684 | orchestrator | 2026-04-09 03:25:33.420696 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-09 03:25:33.420708 | orchestrator | Thursday 09 April 2026 03:25:30 +0000 (0:00:01.061) 0:05:24.195 ******** 2026-04-09 03:25:33.420741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:25:33.420758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:25:33.420788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 03:25:33.420807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:25:33.420819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:25:33.420840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116849 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 03:26:36.116960 | orchestrator | 2026-04-09 03:26:36.116976 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 03:26:36.116992 | orchestrator | Thursday 09 April 2026 03:25:33 +0000 (0:00:02.764) 
0:05:26.960 ********
2026-04-09 03:26:36.117006 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:26:36.117022 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:26:36.117037 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:26:36.117052 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:26:36.117067 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:26:36.117081 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:26:36.117095 | orchestrator |
2026-04-09 03:26:36.117109 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 03:26:36.117123 | orchestrator | Thursday 09 April 2026 03:25:34 +0000 (0:00:00.898) 0:05:27.859 ********
2026-04-09 03:26:36.117138 | orchestrator |
2026-04-09 03:26:36.117226 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 03:26:36.117239 | orchestrator | Thursday 09 April 2026 03:25:34 +0000 (0:00:00.163) 0:05:28.022 ********
2026-04-09 03:26:36.117246 | orchestrator |
2026-04-09 03:26:36.117255 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 03:26:36.117271 | orchestrator | Thursday 09 April 2026 03:25:34 +0000 (0:00:00.160) 0:05:28.182 ********
2026-04-09 03:26:36.117279 | orchestrator |
2026-04-09 03:26:36.117287 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 03:26:36.117294 | orchestrator | Thursday 09 April 2026 03:25:35 +0000 (0:00:00.149) 0:05:28.332 ********
2026-04-09 03:26:36.117302 | orchestrator |
2026-04-09 03:26:36.117310 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 03:26:36.117318 | orchestrator | Thursday 09 April 2026 03:25:35 +0000 (0:00:00.155) 0:05:28.487 ********
2026-04-09 03:26:36.117325 | orchestrator |
2026-04-09 03:26:36.117333 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 03:26:36.117341 | orchestrator | Thursday 09 April 2026 03:25:35 +0000 (0:00:00.328) 0:05:28.816 ********
2026-04-09 03:26:36.117349 | orchestrator |
2026-04-09 03:26:36.117356 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-04-09 03:26:36.117364 | orchestrator | Thursday 09 April 2026 03:25:35 +0000 (0:00:00.144) 0:05:28.960 ********
2026-04-09 03:26:36.117372 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:26:36.117380 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:26:36.117388 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:26:36.117396 | orchestrator |
2026-04-09 03:26:36.117403 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-04-09 03:26:36.117411 | orchestrator | Thursday 09 April 2026 03:25:47 +0000 (0:00:12.264) 0:05:41.225 ********
2026-04-09 03:26:36.117419 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:26:36.117427 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:26:36.117435 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:26:36.117442 | orchestrator |
2026-04-09 03:26:36.117450 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-04-09 03:26:36.117466 | orchestrator | Thursday 09 April 2026 03:26:08 +0000 (0:00:20.132) 0:06:01.358 ********
2026-04-09 03:26:36.117474 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:26:36.117482 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:26:36.117490 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:26:36.117497 | orchestrator |
2026-04-09 03:26:36.117515 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-04-09 03:29:02.679254 | orchestrator | Thursday 09 April 2026 03:26:36 +0000 (0:00:28.067) 0:06:29.425 ********
2026-04-09 03:29:02.679348 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:29:02.679358 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:29:02.679363 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:29:02.679369 | orchestrator |
2026-04-09 03:29:02.679376 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-04-09 03:29:02.679382 | orchestrator | Thursday 09 April 2026 03:27:17 +0000 (0:00:41.505) 0:07:10.931 ********
2026-04-09 03:29:02.679387 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:29:02.679393 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:29:02.679399 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:29:02.679404 | orchestrator |
2026-04-09 03:29:02.679410 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-04-09 03:29:02.679415 | orchestrator | Thursday 09 April 2026 03:27:18 +0000 (0:00:00.781) 0:07:11.712 ********
2026-04-09 03:29:02.679421 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:29:02.679428 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:29:02.679437 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:29:02.679445 | orchestrator |
2026-04-09 03:29:02.679458 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-04-09 03:29:02.679468 | orchestrator | Thursday 09 April 2026 03:27:19 +0000 (0:00:00.781) 0:07:12.493 ********
2026-04-09 03:29:02.679477 | orchestrator | changed: [testbed-node-3]
2026-04-09 03:29:02.679486 | orchestrator | changed: [testbed-node-5]
2026-04-09 03:29:02.679494 | orchestrator | changed: [testbed-node-4]
2026-04-09 03:29:02.679503 | orchestrator |
2026-04-09 03:29:02.679512 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-04-09 03:29:02.679522 | orchestrator | Thursday 09 April 2026 03:27:50 +0000 (0:00:31.630) 0:07:44.124 ********
2026-04-09 03:29:02.679531 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:29:02.679539 | orchestrator |
2026-04-09 03:29:02.679547 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-04-09 03:29:02.679555 | orchestrator | Thursday 09 April 2026 03:27:50 +0000 (0:00:00.143) 0:07:44.267 ********
2026-04-09 03:29:02.679564 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:29:02.679573 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:29:02.679581 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:02.679590 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:02.679600 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:02.679610 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-04-09 03:29:02.679621 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-09 03:29:02.679631 | orchestrator |
2026-04-09 03:29:02.679641 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-04-09 03:29:02.679650 | orchestrator | Thursday 09 April 2026 03:28:13 +0000 (0:00:22.473) 0:08:06.740 ********
2026-04-09 03:29:02.679659 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:29:02.679668 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:02.679677 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:29:02.679687 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:02.679695 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:29:02.679705 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:02.679714 | orchestrator |
2026-04-09 03:29:02.679723 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-04-09 03:29:02.679758 | orchestrator | Thursday 09 April 2026 03:28:23 +0000 (0:00:10.305) 0:08:17.046 ********
2026-04-09 03:29:02.679767 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:29:02.679773 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:29:02.679778 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:02.679783 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:02.679790 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:02.679798 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-04-09 03:29:02.679804 | orchestrator |
2026-04-09 03:29:02.679823 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-09 03:29:02.679830 | orchestrator | Thursday 09 April 2026 03:28:27 +0000 (0:00:04.241) 0:08:21.288 ********
2026-04-09 03:29:02.679836 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-09 03:29:02.679843 | orchestrator |
2026-04-09 03:29:02.679849 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-09 03:29:02.679855 | orchestrator | Thursday 09 April 2026 03:28:41 +0000 (0:00:13.607) 0:08:34.896 ********
2026-04-09 03:29:02.679861 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-09 03:29:02.679868 | orchestrator |
2026-04-09 03:29:02.679874 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-04-09 03:29:02.679880 | orchestrator | Thursday 09 April 2026 03:28:43 +0000 (0:00:01.807) 0:08:36.703 ********
2026-04-09 03:29:02.679887 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:29:02.679893 | orchestrator |
2026-04-09 03:29:02.679899 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-04-09 03:29:02.679906 | orchestrator | Thursday 09 April 2026 03:28:45 +0000 (0:00:01.911) 0:08:38.615 ********
2026-04-09 03:29:02.679912 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-09 03:29:02.679919 | orchestrator |
2026-04-09 03:29:02.679925 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-04-09 03:29:02.679931 | orchestrator | Thursday 09 April 2026 03:28:56 +0000 (0:00:11.296) 0:08:49.911 ********
2026-04-09 03:29:02.679937 | orchestrator | ok: [testbed-node-3]
2026-04-09 03:29:02.679945 | orchestrator | ok: [testbed-node-4]
2026-04-09 03:29:02.679951 | orchestrator | ok: [testbed-node-5]
2026-04-09 03:29:02.679957 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:02.679963 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:02.679970 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:02.679976 | orchestrator |
2026-04-09 03:29:02.679983 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-09 03:29:02.679989 | orchestrator |
2026-04-09 03:29:02.679995 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-09 03:29:02.680017 | orchestrator | Thursday 09 April 2026 03:28:58 +0000 (0:00:01.893) 0:08:51.805 ********
2026-04-09 03:29:02.680024 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:29:02.680052 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:29:02.680059 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:29:02.680066 | orchestrator |
2026-04-09 03:29:02.680072 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-09 03:29:02.680078 | orchestrator |
2026-04-09 03:29:02.680084 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-09 03:29:02.680090 | orchestrator | Thursday 09 April 2026 03:28:59 +0000 (0:00:01.110) 0:08:52.916 ********
2026-04-09 03:29:02.680097 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:02.680103 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:02.680109 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:02.680115 | orchestrator |
2026-04-09 03:29:02.680122 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-09 03:29:02.680128 | orchestrator |
2026-04-09 03:29:02.680134 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-09 03:29:02.680141 | orchestrator | Thursday 09 April 2026 03:29:00 +0000 (0:00:00.957) 0:08:53.874 ********
2026-04-09 03:29:02.680153 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-09 03:29:02.680160 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-09 03:29:02.680167 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-09 03:29:02.680172 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-09 03:29:02.680177 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-09 03:29:02.680183 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-09 03:29:02.680188 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:29:02.680194 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-09 03:29:02.680199 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-09 03:29:02.680205 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-09 03:29:02.680210 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-09 03:29:02.680215 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-09 03:29:02.680221 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-09 03:29:02.680226 | orchestrator | skipping: [testbed-node-4]
2026-04-09 03:29:02.680231 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-09 03:29:02.680237 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-09 03:29:02.680242 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-09 03:29:02.680247 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-09 03:29:02.680253 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-09 03:29:02.680258 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-09 03:29:02.680263 | orchestrator | skipping: [testbed-node-5]
2026-04-09 03:29:02.680269 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-09 03:29:02.680274 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-09 03:29:02.680279 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-09 03:29:02.680285 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-09 03:29:02.680290 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-09 03:29:02.680295 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-09 03:29:02.680301 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:02.680306 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-09 03:29:02.680311 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-09 03:29:02.680321 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-09 03:29:02.680326 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-09 03:29:02.680332 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-09 03:29:02.680337 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-09 03:29:02.680342 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:02.680348 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-09 03:29:02.680353 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-09 03:29:02.680358 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-09 03:29:02.680364 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-09 03:29:02.680369 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-09 03:29:02.680375 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-09 03:29:02.680380 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:02.680385 | orchestrator |
2026-04-09 03:29:02.680391 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-09 03:29:02.680396 | orchestrator |
2026-04-09 03:29:02.680401 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-09 03:29:02.680412 | orchestrator | Thursday 09 April 2026 03:29:02 +0000 (0:00:01.485) 0:08:55.360 ********
2026-04-09 03:29:02.680417 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-09 03:29:02.680423 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-09 03:29:02.680428 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:02.680434 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-09 03:29:02.680439 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-09 03:29:02.680445 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:02.680450 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-09 03:29:02.680455 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-09 03:29:02.680461 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:02.680466 | orchestrator |
2026-04-09 03:29:02.680475 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-09 03:29:04.614172 | orchestrator |
2026-04-09 03:29:04.614302 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-09 03:29:04.614323 |
orchestrator | Thursday 09 April 2026 03:29:02 +0000 (0:00:00.634) 0:08:55.994 ********
2026-04-09 03:29:04.614340 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:04.614357 | orchestrator |
2026-04-09 03:29:04.614388 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-09 03:29:04.614414 | orchestrator |
2026-04-09 03:29:04.614429 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-09 03:29:04.614444 | orchestrator | Thursday 09 April 2026 03:29:03 +0000 (0:00:00.965) 0:08:56.960 ********
2026-04-09 03:29:04.614458 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:04.614473 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:04.614488 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:04.614502 | orchestrator |
2026-04-09 03:29:04.614516 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:29:04.614531 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 03:29:04.614548 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-04-09 03:29:04.614563 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-09 03:29:04.614577 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-09 03:29:04.614592 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-09 03:29:04.614608 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-09 03:29:04.614624 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-09 03:29:04.614640 | orchestrator |
2026-04-09 03:29:04.614655 | orchestrator |
2026-04-09 03:29:04.614671 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:29:04.614687 | orchestrator | Thursday 09 April 2026 03:29:04 +0000 (0:00:00.503) 0:08:57.463 ********
2026-04-09 03:29:04.614703 | orchestrator | ===============================================================================
2026-04-09 03:29:04.614719 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 41.51s
2026-04-09 03:29:04.614735 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.92s
2026-04-09 03:29:04.614751 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.63s
2026-04-09 03:29:04.614799 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 28.07s
2026-04-09 03:29:04.614813 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.47s
2026-04-09 03:29:04.614828 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.31s
2026-04-09 03:29:04.614843 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 20.13s
2026-04-09 03:29:04.614874 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.84s
2026-04-09 03:29:04.614890 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.71s
2026-04-09 03:29:04.614904 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.85s
2026-04-09 03:29:04.614919 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.61s
2026-04-09 03:29:04.614932 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.05s
2026-04-09 03:29:04.614947 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.26s
2026-04-09 03:29:04.614962 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.18s
2026-04-09 03:29:04.614991 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.37s
2026-04-09 03:29:04.615008 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.30s
2026-04-09 03:29:04.615024 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.31s
2026-04-09 03:29:04.615114 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.57s
2026-04-09 03:29:04.615128 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 7.38s
2026-04-09 03:29:04.615143 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.30s
2026-04-09 03:29:07.189895 | orchestrator | 2026-04-09 03:29:07 | INFO  | Task 0836055e-4d58-4f1e-8197-a52608e8bcbe (horizon) was prepared for execution.
2026-04-09 03:29:07.189988 | orchestrator | 2026-04-09 03:29:07 | INFO  | It takes a moment until task 0836055e-4d58-4f1e-8197-a52608e8bcbe (horizon) has been started and output is visible here.
2026-04-09 03:29:14.924991 | orchestrator |
2026-04-09 03:29:14.925103 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 03:29:14.925114 | orchestrator |
2026-04-09 03:29:14.925122 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 03:29:14.925130 | orchestrator | Thursday 09 April 2026 03:29:11 +0000 (0:00:00.300) 0:00:00.300 ********
2026-04-09 03:29:14.925137 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:14.925146 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:14.925153 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:14.925160 | orchestrator |
2026-04-09 03:29:14.925168 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 03:29:14.925175 | orchestrator | Thursday 09 April 2026 03:29:12 +0000 (0:00:00.338) 0:00:00.639 ********
2026-04-09 03:29:14.925183 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-09 03:29:14.925191 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-09 03:29:14.925198 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-09 03:29:14.925205 | orchestrator |
2026-04-09 03:29:14.925213 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-09 03:29:14.925220 | orchestrator |
2026-04-09 03:29:14.925227 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-09 03:29:14.925235 | orchestrator | Thursday 09 April 2026 03:29:12 +0000 (0:00:00.472) 0:00:01.111 ********
2026-04-09 03:29:14.925242 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:29:14.925250 | orchestrator |
2026-04-09 03:29:14.925257 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-09 03:29:14.925264 | orchestrator | Thursday 09 April 2026 03:29:13 +0000 (0:00:00.553) 0:00:01.665 ******** 2026-04-09 03:29:14.925314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 03:29:14.925341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 03:29:14.925369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 03:29:14.925385 | orchestrator |
2026-04-09 03:29:14.925392 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-09 03:29:14.925398 | orchestrator | Thursday 09 April 2026 03:29:14 +0000 (0:00:01.195) 0:00:02.860 ********
2026-04-09 03:29:14.925405 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:14.925411 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:14.925418 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:14.925424 | orchestrator |
2026-04-09 03:29:14.925431 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-09 03:29:14.925437 | orchestrator | Thursday 09 April 2026 03:29:14 +0000 (0:00:00.509) 0:00:03.370 ********
2026-04-09 03:29:14.925448 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-09 03:29:21.258465 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-09 03:29:21.258560 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-09 03:29:21.258571 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-09 03:29:21.258579 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-09 03:29:21.258587 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-09 03:29:21.258595 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-09 03:29:21.258603 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-09 03:29:21.258630 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-09 03:29:21.258638 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-09 03:29:21.258645 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-09 03:29:21.258652 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-09 03:29:21.258659 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-09 03:29:21.258667 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-09 03:29:21.258674 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-09 03:29:21.258681 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-09 03:29:21.258688 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-09 03:29:21.258695 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-09 03:29:21.258702 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-09 03:29:21.258709 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-09 03:29:21.258716 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-09 03:29:21.258723 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-09 03:29:21.258731 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-09 03:29:21.258738 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-09 03:29:21.258747 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-09 03:29:21.258756 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-09 03:29:21.258763 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-09 03:29:21.258771 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-09 03:29:21.258790 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-09 03:29:21.258797 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-09 03:29:21.258805 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-09 03:29:21.258812 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-09 03:29:21.258819 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-09 03:29:21.258828 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-09 03:29:21.258835 | orchestrator |
2026-04-09 03:29:21.258843 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:21.258852 | orchestrator | Thursday 09 April 2026 03:29:15 +0000 (0:00:00.832) 0:00:04.202 ********
2026-04-09 03:29:21.258859 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:21.258874 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:21.258881 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:21.258888 | orchestrator |
2026-04-09 03:29:21.258896 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:21.258903 | orchestrator | Thursday 09 April 2026 03:29:15 +0000 (0:00:00.356) 0:00:04.537 ********
2026-04-09 03:29:21.258910 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.258919 | orchestrator |
2026-04-09 03:29:21.258942 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:21.258956 | orchestrator | Thursday 09 April 2026 03:29:16 +0000 (0:00:00.321) 0:00:04.893 ********
2026-04-09 03:29:21.258969 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.258981 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:21.258995 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:21.259007 | orchestrator |
2026-04-09 03:29:21.259084 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:21.259098 | orchestrator | Thursday 09 April 2026 03:29:16 +0000 (0:00:00.363) 0:00:05.215 ********
2026-04-09 03:29:21.259109 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:21.259125 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:21.259143 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:21.259155 | orchestrator |
2026-04-09 03:29:21.259167 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:21.259179 | orchestrator | Thursday 09 April 2026 03:29:16 +0000 (0:00:00.363) 0:00:05.579 ********
2026-04-09 03:29:21.259191 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259203 | orchestrator |
2026-04-09 03:29:21.259216 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:21.259228 | orchestrator | Thursday 09 April 2026 03:29:17 +0000 (0:00:00.134) 0:00:05.713 ********
2026-04-09 03:29:21.259240 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259253 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:21.259265 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:21.259277 | orchestrator |
2026-04-09 03:29:21.259289 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:21.259302 | orchestrator | Thursday 09 April 2026 03:29:17 +0000 (0:00:00.302) 0:00:06.015 ********
2026-04-09 03:29:21.259315 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:21.259326 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:21.259337 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:21.259349 | orchestrator |
2026-04-09 03:29:21.259363 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:21.259375 | orchestrator | Thursday 09 April 2026 03:29:17 +0000 (0:00:00.536) 0:00:06.551 ********
2026-04-09 03:29:21.259388 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259401 | orchestrator |
2026-04-09 03:29:21.259414 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:21.259427 | orchestrator | Thursday 09 April 2026 03:29:18 +0000 (0:00:00.141) 0:00:06.693 ********
2026-04-09 03:29:21.259441 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259454 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:21.259462 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:21.259469 | orchestrator |
2026-04-09 03:29:21.259476 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:21.259483 | orchestrator | Thursday 09 April 2026 03:29:18 +0000 (0:00:00.325) 0:00:07.018 ********
2026-04-09 03:29:21.259491 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:21.259498 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:21.259505 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:21.259512 | orchestrator |
2026-04-09 03:29:21.259519 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:21.259526 | orchestrator | Thursday 09 April 2026 03:29:18 +0000 (0:00:00.327) 0:00:07.346 ********
2026-04-09 03:29:21.259533 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259550 | orchestrator |
2026-04-09 03:29:21.259557 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:21.259565 | orchestrator | Thursday 09 April 2026 03:29:18 +0000 (0:00:00.139) 0:00:07.485 ********
2026-04-09 03:29:21.259572 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259579 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:21.259586 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:21.259593 | orchestrator |
2026-04-09 03:29:21.259600 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:21.259607 | orchestrator | Thursday 09 April 2026 03:29:19 +0000 (0:00:00.547) 0:00:08.033 ********
2026-04-09 03:29:21.259615 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:21.259622 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:21.259636 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:21.259644 | orchestrator |
2026-04-09 03:29:21.259651 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:21.259658 | orchestrator | Thursday 09 April 2026 03:29:19 +0000 (0:00:00.330) 0:00:08.363 ********
2026-04-09 03:29:21.259665 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259672 | orchestrator |
2026-04-09 03:29:21.259679 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:21.259686 | orchestrator | Thursday 09 April 2026 03:29:19 +0000 (0:00:00.132) 0:00:08.496 ********
2026-04-09 03:29:21.259694 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259701 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:21.259708 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:21.259715 | orchestrator |
2026-04-09 03:29:21.259722 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:21.259730 | orchestrator | Thursday 09 April 2026 03:29:20 +0000 (0:00:00.308) 0:00:08.805 ********
2026-04-09 03:29:21.259737 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:21.259744 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:21.259751 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:21.259758 | orchestrator |
2026-04-09 03:29:21.259765 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:21.259773 | orchestrator | Thursday 09 April 2026 03:29:20 +0000 (0:00:00.335) 0:00:09.141 ********
2026-04-09 03:29:21.259780 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259804 | orchestrator |
2026-04-09 03:29:21.259812 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:21.259819 | orchestrator | Thursday 09 April 2026 03:29:20 +0000 (0:00:00.357) 0:00:09.498 ********
2026-04-09 03:29:21.259826 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:21.259834 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:21.259841 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:21.259848 | orchestrator |
2026-04-09 03:29:21.259855 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:21.259872 | orchestrator | Thursday 09 April 2026 03:29:21 +0000 (0:00:00.346) 0:00:09.845 ********
2026-04-09 03:29:36.288786 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:36.288869 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:36.288877 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:36.288884 | orchestrator |
2026-04-09 03:29:36.288890 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:36.288897 | orchestrator | Thursday 09 April 2026 03:29:21 +0000 (0:00:00.353) 0:00:10.199 ********
2026-04-09 03:29:36.288903 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.288910 | orchestrator |
2026-04-09 03:29:36.288915 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:36.288921 | orchestrator | Thursday 09 April 2026 03:29:21 +0000 (0:00:00.146) 0:00:10.345 ********
2026-04-09 03:29:36.288927 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.288932 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:36.288938 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:36.288943 | orchestrator |
2026-04-09 03:29:36.288949 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:36.288973 | orchestrator | Thursday 09 April 2026 03:29:22 +0000 (0:00:00.335) 0:00:10.681 ********
2026-04-09 03:29:36.288979 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:36.288985 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:36.288991 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:36.288996 | orchestrator |
2026-04-09 03:29:36.289001 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:36.289007 | orchestrator | Thursday 09 April 2026 03:29:22 +0000 (0:00:00.600) 0:00:11.281 ********
2026-04-09 03:29:36.289013 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.289018 | orchestrator |
2026-04-09 03:29:36.289024 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:36.289029 | orchestrator | Thursday 09 April 2026 03:29:22 +0000 (0:00:00.140) 0:00:11.421 ********
2026-04-09 03:29:36.289034 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.289040 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:36.289045 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:36.289051 | orchestrator |
2026-04-09 03:29:36.289056 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:36.289061 | orchestrator | Thursday 09 April 2026 03:29:23 +0000 (0:00:00.332) 0:00:11.754 ********
2026-04-09 03:29:36.289067 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:36.289072 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:36.289077 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:36.289083 | orchestrator |
2026-04-09 03:29:36.289088 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:36.289094 | orchestrator | Thursday 09 April 2026 03:29:23 +0000 (0:00:00.374) 0:00:12.129 ********
2026-04-09 03:29:36.289099 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.289104 | orchestrator |
2026-04-09 03:29:36.289110 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:36.289115 | orchestrator | Thursday 09 April 2026 03:29:23 +0000 (0:00:00.146) 0:00:12.275 ********
2026-04-09 03:29:36.289187 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.289193 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:36.289199 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:36.289204 | orchestrator |
2026-04-09 03:29:36.289210 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 03:29:36.289215 | orchestrator | Thursday 09 April 2026 03:29:24 +0000 (0:00:00.540) 0:00:12.815 ********
2026-04-09 03:29:36.289220 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:29:36.289225 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:29:36.289231 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:29:36.289236 | orchestrator |
2026-04-09 03:29:36.289241 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 03:29:36.289247 | orchestrator | Thursday 09 April 2026 03:29:24 +0000 (0:00:00.356) 0:00:13.172 ********
2026-04-09 03:29:36.289252 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.289257 | orchestrator |
2026-04-09 03:29:36.289262 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 03:29:36.289268 | orchestrator | Thursday 09 April 2026 03:29:24 +0000 (0:00:00.151) 0:00:13.324 ********
2026-04-09 03:29:36.289286 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.289291 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:36.289296 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:36.289302 | orchestrator |
2026-04-09 03:29:36.289307 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-09 03:29:36.289313 | orchestrator | Thursday 09 April 2026 03:29:25 +0000 (0:00:00.356) 0:00:13.681 ********
2026-04-09 03:29:36.289319 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:29:36.289324 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:29:36.289329 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:29:36.289335 | orchestrator |
2026-04-09 03:29:36.289340 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-09 03:29:36.289353 | orchestrator | Thursday 09 April 2026 03:29:26 +0000 (0:00:01.857) 0:00:15.538 ********
2026-04-09 03:29:36.289360 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-09 03:29:36.289367 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-09 03:29:36.289374 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-09 03:29:36.289380 | orchestrator |
2026-04-09 03:29:36.289389 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-09 03:29:36.289398 | orchestrator | Thursday 09 April 2026 03:29:28 +0000 (0:00:01.964) 0:00:17.502 ********
2026-04-09 03:29:36.289407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-09 03:29:36.289417 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-09 03:29:36.289427 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-09 03:29:36.289436 | orchestrator |
2026-04-09 03:29:36.289446 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-09 03:29:36.289472 | orchestrator | Thursday 09 April 2026 03:29:30 +0000 (0:00:01.849) 0:00:19.352 ********
2026-04-09 03:29:36.289481 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-09 03:29:36.289490 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-09 03:29:36.289499 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-09 03:29:36.289509 | orchestrator |
2026-04-09 03:29:36.289518 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-09 03:29:36.289526 | orchestrator | Thursday 09 April 2026 03:29:32 +0000 (0:00:01.755) 0:00:21.107 ********
2026-04-09 03:29:36.289535 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.289545 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:36.289555 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:36.289564 | orchestrator |
2026-04-09 03:29:36.289573 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-09 03:29:36.289582 | orchestrator | Thursday 09 April 2026 03:29:33 +0000 (0:00:00.561) 0:00:21.669 ********
2026-04-09 03:29:36.289592 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:29:36.289601 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:29:36.289610 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:29:36.289620 | orchestrator |
2026-04-09 03:29:36.289629 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-09 03:29:36.289639 | orchestrator | Thursday 09 April 2026 03:29:33 +0000 (0:00:00.325) 0:00:21.994 ********
2026-04-09 03:29:36.289648 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:29:36.289658 | orchestrator |
2026-04-09 03:29:36.289666 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-04-09 03:29:36.289674 | orchestrator |
Thursday 09 April 2026 03:29:34 +0000 (0:00:00.906) 0:00:22.900 ******** 2026-04-09 03:29:36.289697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 03:29:36.289728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 03:29:37.031164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 03:29:37.031341 | orchestrator | 2026-04-09 03:29:37.031376 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-09 03:29:37.031395 | orchestrator | Thursday 09 April 2026 03:29:36 +0000 (0:00:01.972) 0:00:24.873 ******** 2026-04-09 03:29:37.031433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 03:29:37.031458 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:29:37.031482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 03:29:37.031498 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:29:37.031531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 03:29:39.944795 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:29:39.944900 | orchestrator | 2026-04-09 03:29:39.944924 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-04-09 03:29:39.944936 | orchestrator | Thursday 09 April 2026 03:29:37 +0000 (0:00:00.744) 0:00:25.617 ******** 2026-04-09 03:29:39.944966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 03:29:39.944980 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:29:39.945011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 03:29:39.945048 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:29:39.945059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 03:29:39.945069 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:29:39.945078 | orchestrator | 2026-04-09 03:29:39.945087 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-09 03:29:39.945096 | orchestrator | Thursday 09 April 2026 03:29:38 +0000 (0:00:01.024) 0:00:26.642 ******** 2026-04-09 03:29:39.945115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 03:30:29.555318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 03:30:29.555594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 03:30:29.555619 | orchestrator | 
2026-04-09 03:30:29.555631 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 03:30:29.555642 | orchestrator | Thursday 09 April 2026 03:29:39 +0000 (0:00:01.889) 0:00:28.532 ******** 2026-04-09 03:30:29.555652 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:30:29.555663 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:30:29.555673 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:30:29.555682 | orchestrator | 2026-04-09 03:30:29.555692 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 03:30:29.555702 | orchestrator | Thursday 09 April 2026 03:29:40 +0000 (0:00:00.352) 0:00:28.884 ******** 2026-04-09 03:30:29.555713 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:30:29.555722 | orchestrator | 2026-04-09 03:30:29.555732 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-09 03:30:29.555742 | orchestrator | Thursday 09 April 2026 03:29:40 +0000 (0:00:00.584) 0:00:29.469 ******** 2026-04-09 03:30:29.555752 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:30:29.555761 | orchestrator | 2026-04-09 03:30:29.555777 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-09 03:30:29.555796 | orchestrator | Thursday 09 April 2026 03:29:43 +0000 (0:00:02.178) 0:00:31.647 ******** 2026-04-09 03:30:29.555822 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:30:29.555836 | orchestrator | 2026-04-09 03:30:29.555851 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-09 03:30:29.555867 | orchestrator | Thursday 09 April 2026 03:29:45 +0000 (0:00:02.793) 0:00:34.441 ******** 2026-04-09 03:30:29.555887 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:30:29.555918 | orchestrator 
2026-04-09 03:30:29.555934 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-09 03:30:29.555949 | orchestrator | Thursday 09 April 2026 03:30:02 +0000 (0:00:16.407) 0:00:50.848 ********
2026-04-09 03:30:29.555965 | orchestrator |
2026-04-09 03:30:29.555980 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-09 03:30:29.555995 | orchestrator | Thursday 09 April 2026 03:30:02 +0000 (0:00:00.098) 0:00:50.947 ********
2026-04-09 03:30:29.556010 | orchestrator |
2026-04-09 03:30:29.556026 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-09 03:30:29.556042 | orchestrator | Thursday 09 April 2026 03:30:02 +0000 (0:00:00.083) 0:00:51.030 ********
2026-04-09 03:30:29.556059 | orchestrator |
2026-04-09 03:30:29.556076 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-04-09 03:30:29.556093 | orchestrator | Thursday 09 April 2026 03:30:02 +0000 (0:00:00.075) 0:00:51.106 ********
2026-04-09 03:30:29.556111 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:30:29.556129 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:30:29.556145 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:30:29.556161 | orchestrator |
2026-04-09 03:30:29.556177 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:30:29.556195 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-09 03:30:29.556213 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-09 03:30:29.556229 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-09 03:30:29.556245 | orchestrator |
2026-04-09 03:30:29.556261 | orchestrator |
2026-04-09 03:30:29.556277 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:30:29.556295 | orchestrator | Thursday 09 April 2026 03:30:29 +0000 (0:00:27.014) 0:01:18.121 ********
2026-04-09 03:30:29.556311 | orchestrator | ===============================================================================
2026-04-09 03:30:29.556329 | orchestrator | horizon : Restart horizon container ------------------------------------ 27.02s
2026-04-09 03:30:29.556347 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.41s
2026-04-09 03:30:29.556364 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.79s
2026-04-09 03:30:29.556382 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.18s
2026-04-09 03:30:29.556399 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.97s
2026-04-09 03:30:29.556427 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.96s
2026-04-09 03:30:29.556446 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.89s
2026-04-09 03:30:29.556462 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.86s
2026-04-09 03:30:29.556517 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.85s
2026-04-09 03:30:29.556537 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.76s
2026-04-09 03:30:29.556556 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s
2026-04-09 03:30:29.556574 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.03s
2026-04-09 03:30:29.556591 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.91s
2026-04-09 03:30:29.556624 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s
2026-04-09 03:30:30.010788 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.74s
2026-04-09 03:30:30.010895 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s
2026-04-09 03:30:30.010912 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2026-04-09 03:30:30.010953 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.56s
2026-04-09 03:30:30.010965 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s
2026-04-09 03:30:30.010976 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s
2026-04-09 03:30:32.734377 | orchestrator | 2026-04-09 03:30:32 | INFO  | Task c5764ed7-2d6a-45b6-8ff3-8c99b84bdd6d (skyline) was prepared for execution.
2026-04-09 03:30:32.734479 | orchestrator | 2026-04-09 03:30:32 | INFO  | It takes a moment until task c5764ed7-2d6a-45b6-8ff3-8c99b84bdd6d (skyline) has been started and output is visible here.
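The PLAY RECAP block above follows Ansible's fixed `host : ok=N changed=N …` layout, which makes it the easiest place to machine-check a run when post-processing console logs like this one. A minimal sketch of such a check (the `parse_recap` helper is hypothetical, not part of this job):

```python
import re

def parse_recap(line: str) -> dict:
    """Parse one Ansible PLAY RECAP line, e.g.
    'testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0'."""
    # Split host from the counter section at the first ':'.
    host, _, stats = line.partition(":")
    # Each counter is a 'name=integer' pair; collect them all.
    counters = {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", stats)}
    return {"host": host.strip(), **counters}

recap = parse_recap(
    "testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0"
)
# A run is healthy when no host reports failed or unreachable tasks.
assert recap["failed"] == 0 and recap["unreachable"] == 0
print(recap["host"], recap["ok"], recap["changed"])
```

Applied to the three recap lines above, this would confirm `failed=0` and `unreachable=0` on every testbed node.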
2026-04-09 03:31:03.821267 | orchestrator |
2026-04-09 03:31:03.821349 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 03:31:03.821357 | orchestrator |
2026-04-09 03:31:03.821363 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 03:31:03.821368 | orchestrator | Thursday 09 April 2026 03:30:37 +0000 (0:00:00.274) 0:00:00.274 ********
2026-04-09 03:31:03.821373 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:31:03.821380 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:31:03.821385 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:31:03.821390 | orchestrator |
2026-04-09 03:31:03.821395 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 03:31:03.821400 | orchestrator | Thursday 09 April 2026 03:30:37 +0000 (0:00:00.348) 0:00:00.623 ********
2026-04-09 03:31:03.821405 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-04-09 03:31:03.821410 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-04-09 03:31:03.821415 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-04-09 03:31:03.821420 | orchestrator |
2026-04-09 03:31:03.821425 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-04-09 03:31:03.821430 | orchestrator |
2026-04-09 03:31:03.821434 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-04-09 03:31:03.821439 | orchestrator | Thursday 09 April 2026 03:30:38 +0000 (0:00:00.467) 0:00:01.091 ********
2026-04-09 03:31:03.821445 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:31:03.821450 | orchestrator |
2026-04-09 03:31:03.821455 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-04-09 03:31:03.821460 | orchestrator | Thursday 09 April 2026 03:30:38 +0000 (0:00:00.601) 0:00:01.692 ********
2026-04-09 03:31:03.821464 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-04-09 03:31:03.821469 | orchestrator |
2026-04-09 03:31:03.821474 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-04-09 03:31:03.821479 | orchestrator | Thursday 09 April 2026 03:30:42 +0000 (0:00:03.446) 0:00:05.139 ********
2026-04-09 03:31:03.821484 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-04-09 03:31:03.821489 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-04-09 03:31:03.821494 | orchestrator |
2026-04-09 03:31:03.821498 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-04-09 03:31:03.821503 | orchestrator | Thursday 09 April 2026 03:30:48 +0000 (0:00:06.337) 0:00:11.476 ********
2026-04-09 03:31:03.821508 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 03:31:03.821514 | orchestrator |
2026-04-09 03:31:03.821518 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-04-09 03:31:03.821524 | orchestrator | Thursday 09 April 2026 03:30:51 +0000 (0:00:03.120) 0:00:14.597 ********
2026-04-09 03:31:03.821529 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 03:31:03.821534 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-04-09 03:31:03.821538 | orchestrator |
2026-04-09 03:31:03.821543 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-04-09 03:31:03.821567 | orchestrator | Thursday 09 April 2026 03:30:55 +0000 (0:00:04.028) 0:00:18.626 ********
2026-04-09 03:31:03.821573 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 03:31:03.821578 | orchestrator | 2026-04-09 03:31:03.821583 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-04-09 03:31:03.821587 | orchestrator | Thursday 09 April 2026 03:30:58 +0000 (0:00:03.205) 0:00:21.831 ******** 2026-04-09 03:31:03.821592 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-04-09 03:31:03.821597 | orchestrator | 2026-04-09 03:31:03.821612 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-09 03:31:03.821617 | orchestrator | Thursday 09 April 2026 03:31:02 +0000 (0:00:03.683) 0:00:25.515 ******** 2026-04-09 03:31:03.821625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:03.821644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:03.821650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:03.821656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:03.821671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:03.821681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:07.681853 | orchestrator | 2026-04-09 03:31:07.681955 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-09 03:31:07.681972 | orchestrator | Thursday 09 April 2026 03:31:03 +0000 (0:00:01.341) 0:00:26.856 ******** 2026-04-09 03:31:07.681985 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:31:07.681997 | orchestrator | 2026-04-09 03:31:07.682008 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-09 03:31:07.682071 | orchestrator | Thursday 09 April 2026 03:31:04 +0000 (0:00:00.786) 0:00:27.643 ******** 2026-04-09 03:31:07.682087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:07.682137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:07.682178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:07.682224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:07.682247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:07.682268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:07.682300 | orchestrator | 2026-04-09 03:31:07.682319 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-09 03:31:07.682334 | orchestrator | Thursday 09 April 2026 03:31:07 +0000 (0:00:02.462) 0:00:30.105 ******** 2026-04-09 03:31:07.682352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 03:31:07.682364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 03:31:07.682376 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:31:07.682399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 03:31:09.147717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 03:31:09.147886 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:31:09.147927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 03:31:09.147945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 03:31:09.147962 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:31:09.147977 | orchestrator | 2026-04-09 03:31:09.147994 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-09 03:31:09.148006 | orchestrator | Thursday 09 April 2026 03:31:07 +0000 (0:00:00.614) 0:00:30.720 ******** 2026-04-09 03:31:09.148015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 03:31:09.148051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 03:31:09.148061 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:31:09.148076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 03:31:09.148085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 03:31:09.148094 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:31:09.148103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 03:31:09.148137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 03:31:18.077633 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:31:18.077723 | orchestrator | 2026-04-09 03:31:18.077733 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-04-09 03:31:18.077741 | orchestrator | Thursday 09 April 2026 03:31:09 +0000 (0:00:01.463) 0:00:32.183 ******** 2026-04-09 03:31:18.077762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:18.077772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:18.077886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:18.077914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:18.077941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:18.077958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-09 03:31:18.077969 | orchestrator |
2026-04-09 03:31:18.077979 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-04-09 03:31:18.077989 | orchestrator | Thursday 09 April 2026 03:31:11 +0000 (0:00:02.607) 0:00:34.791 ********
2026-04-09 03:31:18.077999 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-04-09 03:31:18.078010 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-04-09 03:31:18.078074 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-04-09 03:31:18.078081 | orchestrator |
2026-04-09 03:31:18.078087 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-04-09 03:31:18.078094 | orchestrator | Thursday 09 April 2026 03:31:13 +0000 (0:00:01.682) 0:00:36.473 ********
2026-04-09 03:31:18.078100 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-04-09 03:31:18.078106 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-04-09 03:31:18.078120 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-04-09 03:31:18.078126 | orchestrator |
2026-04-09 03:31:18.078132 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-04-09 03:31:18.078138 | orchestrator | Thursday 09 April 2026 03:31:15 +0000 (0:00:02.259) 0:00:38.733 ********
2026-04-09 03:31:18.078145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:18.078160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:20.292132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:20.292255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:20.292290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:20.292300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:20.292308 | orchestrator | 2026-04-09 03:31:20.292319 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-09 03:31:20.292329 | orchestrator | Thursday 09 April 2026 03:31:18 +0000 (0:00:02.384) 0:00:41.118 ******** 2026-04-09 03:31:20.292337 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:31:20.292348 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 03:31:20.292357 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:31:20.292364 | orchestrator | 2026-04-09 03:31:20.292388 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-04-09 03:31:20.292396 | orchestrator | Thursday 09 April 2026 03:31:18 +0000 (0:00:00.311) 0:00:41.429 ******** 2026-04-09 03:31:20.292412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:20.292423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:20.292441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:20.292450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:20.292471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 03:31:59.385429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-09 03:31:59.385533 | orchestrator |
2026-04-09 03:31:59.385542 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-04-09 03:31:59.385548 | orchestrator | Thursday 09 April 2026 03:31:20 +0000 (0:00:01.899) 0:00:43.329 ********
2026-04-09 03:31:59.385553 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:31:59.385558 | orchestrator |
2026-04-09 03:31:59.385562 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-04-09 03:31:59.385566 | orchestrator | Thursday 09 April 2026 03:31:22 +0000 (0:00:02.110) 0:00:45.439 ********
2026-04-09 03:31:59.385570 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:31:59.385574 | orchestrator |
2026-04-09 03:31:59.385578 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-04-09 03:31:59.385583 | orchestrator | Thursday 09 April 2026 03:31:24 +0000 (0:00:02.208) 0:00:47.648 ********
2026-04-09 03:31:59.385587 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:31:59.385591 | orchestrator |
2026-04-09 03:31:59.385595 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-09 03:31:59.385599 | orchestrator | Thursday 09 April 2026 03:31:32 +0000 (0:00:07.765) 0:00:55.413 ********
2026-04-09 03:31:59.385604 | orchestrator |
2026-04-09 03:31:59.385608 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-09 03:31:59.385612 | orchestrator | Thursday 09 April 2026 03:31:32 +0000 (0:00:00.071) 0:00:55.485 ********
2026-04-09 03:31:59.385616 | orchestrator |
2026-04-09 03:31:59.385620 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-09 03:31:59.385624 | orchestrator | Thursday 09 April 2026 03:31:32 +0000 (0:00:00.073) 0:00:55.558 ********
2026-04-09 03:31:59.385628 | orchestrator |
2026-04-09 03:31:59.385632 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-04-09 03:31:59.385637 | orchestrator | Thursday 09 April 2026 03:31:32 +0000 (0:00:00.083) 0:00:55.642 ********
2026-04-09 03:31:59.385641 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:31:59.385645 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:31:59.385649 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:31:59.385653 | orchestrator |
2026-04-09 03:31:59.385657 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-04-09 03:31:59.385661 | orchestrator | Thursday 09 April 2026 03:31:43 +0000 (0:00:11.206) 0:01:06.849 ********
2026-04-09 03:31:59.385665 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:31:59.385670 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:31:59.385674 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:31:59.385678 | orchestrator |
2026-04-09 03:31:59.385682 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:31:59.385687 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 03:31:59.385693 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 03:31:59.385697 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 03:31:59.385701 | orchestrator |
2026-04-09 03:31:59.385705 | orchestrator |
2026-04-09 03:31:59.385709 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:31:59.385714 | orchestrator | Thursday 09 April 2026 03:31:58 +0000 (0:00:15.191) 0:01:22.040 ********
2026-04-09 03:31:59.385720 | orchestrator | ===============================================================================
2026-04-09 03:31:59.385732 | orchestrator | skyline : Restart skyline-console container ---------------------------- 15.19s
2026-04-09 03:31:59.385739 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.21s
2026-04-09 03:31:59.385746 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.77s
2026-04-09 03:31:59.385753 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.34s
2026-04-09 03:31:59.385774 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.03s
2026-04-09 03:31:59.385782 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.68s
2026-04-09 03:31:59.385791 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.45s
2026-04-09 03:31:59.385795 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.21s
2026-04-09 03:31:59.385810 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.12s
2026-04-09 03:31:59.385814 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.61s
2026-04-09 03:31:59.385820 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.46s
2026-04-09 03:31:59.385827 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.38s
2026-04-09 03:31:59.385833 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.26s
2026-04-09 03:31:59.385840 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.21s
2026-04-09 03:31:59.385847 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.11s
2026-04-09 03:31:59.385854 | orchestrator | skyline : Check skyline container --------------------------------------- 1.90s
2026-04-09 03:31:59.385861 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.68s
2026-04-09 03:31:59.385868 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.46s
2026-04-09 03:31:59.385875 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.34s
2026-04-09 03:31:59.385882 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.79s
2026-04-09 03:32:02.099171 | orchestrator | 2026-04-09 03:32:02 | INFO  | Task 350cc51b-5d31-4c77-9946-181bfee4dd0d (glance) was prepared for execution.
2026-04-09 03:32:02.099271 | orchestrator | 2026-04-09 03:32:02 | INFO  | It takes a moment until task 350cc51b-5d31-4c77-9946-181bfee4dd0d (glance) has been started and output is visible here.
2026-04-09 03:32:36.817492 | orchestrator |
2026-04-09 03:32:36.817609 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 03:32:36.817627 | orchestrator |
2026-04-09 03:32:36.817639 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 03:32:36.817651 | orchestrator | Thursday 09 April 2026 03:32:06 +0000 (0:00:00.307) 0:00:00.307 ********
2026-04-09 03:32:36.817662 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:32:36.817675 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:32:36.817685 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:32:36.817723 | orchestrator |
2026-04-09 03:32:36.817735 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 03:32:36.817747 | orchestrator | Thursday 09 April 2026 03:32:07 +0000 (0:00:00.335) 0:00:00.643 ********
2026-04-09 03:32:36.817758 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-09 03:32:36.817770 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-09 03:32:36.817781 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-09 03:32:36.817792 | orchestrator |
2026-04-09 03:32:36.817803 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-09 03:32:36.817815 | orchestrator |
2026-04-09 03:32:36.817826 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-09 03:32:36.817837 | orchestrator | Thursday 09 April 2026 03:32:07 +0000 (0:00:00.488) 0:00:01.131 ********
2026-04-09 03:32:36.817874 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:32:36.817887 | orchestrator |
2026-04-09 03:32:36.817920 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-04-09 03:32:36.817932 | orchestrator | Thursday 09 April 2026 03:32:08 +0000 (0:00:00.602) 0:00:01.734 ********
2026-04-09 03:32:36.817943 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-09 03:32:36.817954 | orchestrator |
2026-04-09 03:32:36.817964 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-09 03:32:36.817975 | orchestrator | Thursday 09 April 2026 03:32:11 +0000 (0:00:03.422) 0:00:05.157 ********
2026-04-09 03:32:36.817986 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-09 03:32:36.817997 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-09 03:32:36.818008 | orchestrator |
2026-04-09 03:32:36.818083 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-09 03:32:36.818096 | orchestrator | Thursday 09 April 2026 03:32:18 +0000 (0:00:06.426) 0:00:11.583 ********
2026-04-09 03:32:36.818165 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 03:32:36.818187 | orchestrator |
2026-04-09 03:32:36.818231 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-09 03:32:36.818253 | orchestrator | Thursday 09 April 2026 03:32:21 +0000 (0:00:03.195) 0:00:14.778 ********
2026-04-09 03:32:36.818272 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 03:32:36.818292 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-09 03:32:36.818312 | orchestrator |
2026-04-09 03:32:36.818333 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-09 03:32:36.818352 | orchestrator | Thursday 09 April 2026 03:32:25 +0000 (0:00:04.008) 0:00:18.786 ********
2026-04-09 03:32:36.818371 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 03:32:36.818383 | orchestrator |
2026-04-09 03:32:36.818393 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-04-09 03:32:36.818419 | orchestrator | Thursday 09 April 2026 03:32:28 +0000 (0:00:03.161) 0:00:21.948 ********
2026-04-09 03:32:36.818430 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-04-09 03:32:36.818440 | orchestrator |
2026-04-09 03:32:36.818451 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-09 03:32:36.818462 | orchestrator | Thursday 09 April 2026 03:32:32 +0000 (0:00:03.839) 0:00:25.788 ********
2026-04-09 03:32:36.818503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:32:36.818535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:32:36.818554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:32:36.818566 | orchestrator | 2026-04-09 03:32:36.818577 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-04-09 03:32:36.818588 | orchestrator | Thursday 09 April 2026 03:32:36 +0000 (0:00:03.805) 0:00:29.594 ******** 2026-04-09 03:32:36.818599 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:32:36.818628 | orchestrator | 2026-04-09 03:32:36.818668 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-09 03:32:53.149996 | orchestrator | Thursday 09 April 2026 03:32:36 +0000 (0:00:00.748) 0:00:30.342 ******** 2026-04-09 03:32:53.150131 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:32:53.150142 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:32:53.150149 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:32:53.150156 | orchestrator | 2026-04-09 03:32:53.150178 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-09 03:32:53.150193 | orchestrator | Thursday 09 April 2026 03:32:40 +0000 (0:00:03.756) 0:00:34.098 ******** 2026-04-09 03:32:53.150201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 03:32:53.150208 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 03:32:53.150215 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 03:32:53.150221 | orchestrator | 2026-04-09 03:32:53.150228 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-09 03:32:53.150234 | orchestrator | Thursday 09 April 2026 03:32:42 +0000 (0:00:01.669) 0:00:35.768 ******** 2026-04-09 03:32:53.150241 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 
03:32:53.150247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 03:32:53.150253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 03:32:53.150259 | orchestrator | 2026-04-09 03:32:53.150265 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-09 03:32:53.150271 | orchestrator | Thursday 09 April 2026 03:32:43 +0000 (0:00:01.448) 0:00:37.217 ******** 2026-04-09 03:32:53.150278 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:32:53.150302 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:32:53.150309 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:32:53.150315 | orchestrator | 2026-04-09 03:32:53.150321 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-09 03:32:53.150327 | orchestrator | Thursday 09 April 2026 03:32:44 +0000 (0:00:00.670) 0:00:37.887 ******** 2026-04-09 03:32:53.150333 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:32:53.150339 | orchestrator | 2026-04-09 03:32:53.150346 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-09 03:32:53.150353 | orchestrator | Thursday 09 April 2026 03:32:44 +0000 (0:00:00.130) 0:00:38.018 ******** 2026-04-09 03:32:53.150359 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:32:53.150365 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:32:53.150371 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:32:53.150377 | orchestrator | 2026-04-09 03:32:53.150383 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 03:32:53.150389 | orchestrator | Thursday 09 April 2026 03:32:44 +0000 (0:00:00.319) 0:00:38.338 ******** 2026-04-09 03:32:53.150395 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:32:53.150402 | orchestrator | 2026-04-09 03:32:53.150408 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-09 03:32:53.150414 | orchestrator | Thursday 09 April 2026 03:32:45 +0000 (0:00:00.777) 0:00:39.115 ******** 2026-04-09 03:32:53.150439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:32:53.150484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:32:53.150497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:32:53.150511 | orchestrator | 2026-04-09 03:32:53.150517 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-09 03:32:53.150523 | orchestrator | Thursday 09 April 2026 03:32:49 +0000 (0:00:04.175) 0:00:43.290 ******** 2026-04-09 03:32:53.150536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 03:32:57.199847 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:32:57.199992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 03:32:57.200048 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:32:57.200064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 03:32:57.200078 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:32:57.200092 | orchestrator | 2026-04-09 03:32:57.200107 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-09 03:32:57.200122 | orchestrator | Thursday 09 April 2026 03:32:53 +0000 (0:00:03.384) 0:00:46.675 ******** 2026-04-09 03:32:57.200160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 03:32:57.200188 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:32:57.200210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 03:32:57.200224 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:32:57.200248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 03:33:36.901162 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:33:36.901278 | orchestrator | 2026-04-09 03:33:36.901314 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-09 03:33:36.901338 | orchestrator | Thursday 09 April 2026 03:32:57 +0000 (0:00:04.050) 0:00:50.725 ******** 2026-04-09 03:33:36.901375 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:33:36.901387 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:33:36.901398 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:33:36.901408 | orchestrator | 2026-04-09 03:33:36.901419 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-09 03:33:36.901430 | orchestrator | Thursday 09 April 2026 03:33:01 +0000 (0:00:03.969) 0:00:54.695 ******** 2026-04-09 03:33:36.901464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:33:36.901480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:33:36.901549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-09 03:33:36.901573 | orchestrator |
2026-04-09 03:33:36.901585 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-09 03:33:36.901596 | orchestrator | Thursday 09 April 2026 03:33:05 +0000 (0:00:04.497) 0:00:59.193 ********
2026-04-09 03:33:36.901607 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:33:36.901618 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:33:36.901629 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:33:36.901639 | orchestrator |
2026-04-09 03:33:36.901650 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-09 03:33:36.901661 | orchestrator | Thursday 09 April 2026 03:33:13 +0000 (0:00:07.698) 0:01:06.891 ********
2026-04-09 03:33:36.901672 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:33:36.901683 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:33:36.901696 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:33:36.901708 | orchestrator |
2026-04-09 03:33:36.901720 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-04-09 03:33:36.901732 | orchestrator | Thursday 09 April 2026 03:33:17 +0000 (0:00:03.714) 0:01:10.606 ********
2026-04-09 03:33:36.901744 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:33:36.901756 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:33:36.901768 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:33:36.901781 | orchestrator |
2026-04-09 03:33:36.901793 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-09 03:33:36.901805 | orchestrator | Thursday 09 April 2026 03:33:20 +0000 (0:00:03.510) 0:01:14.116 ********
2026-04-09 03:33:36.901817 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:33:36.901831 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:33:36.901844 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:33:36.901856 | orchestrator |
2026-04-09 03:33:36.901867 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-09 03:33:36.901878 | orchestrator | Thursday 09 April 2026 03:33:24 +0000 (0:00:03.597) 0:01:17.714 ********
2026-04-09 03:33:36.901888 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:33:36.901899 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:33:36.901910 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:33:36.901921 | orchestrator |
2026-04-09 03:33:36.901932 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-09 03:33:36.901942 | orchestrator | Thursday 09 April 2026 03:33:27 +0000 (0:00:00.561) 0:01:21.537 ********
2026-04-09 03:33:36.901953 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:33:36.901970 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:33:36.901981 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:33:36.901992 | orchestrator |
2026-04-09 03:33:36.902003 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-09 03:33:36.902014 | orchestrator | Thursday 09 April 2026 03:33:28 +0000 (0:00:00.561) 0:01:22.099 ********
2026-04-09 03:33:36.902102 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-09 03:33:36.902116 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:33:36.902127 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-09 03:33:36.902138 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:33:36.902149 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-09 03:33:36.902160 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:33:36.902170 | orchestrator |
2026-04-09 03:33:36.902181 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-09 03:33:36.902192 | orchestrator | Thursday 09 April 2026 03:33:32 +0000 (0:00:03.828) 0:01:25.927 ********
2026-04-09 03:33:36.902203 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:33:36.902214 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:33:36.902226 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:33:36.902245 | orchestrator |
2026-04-09 03:33:36.902266 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-04-09 03:33:36.902307 | orchestrator | Thursday 09 April 2026 03:33:36 +0000 (0:00:04.498) 0:01:30.426 ********
2026-04-09 03:34:59.220395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:34:59.220505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 03:34:59.220568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-09 03:34:59.220582 | orchestrator |
2026-04-09 03:34:59.220592 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-09 03:34:59.220602 | orchestrator | Thursday 09 April 2026 03:33:40 +0000 (0:00:04.039) 0:01:34.466 ********
2026-04-09 03:34:59.220611 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:34:59.220621 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:34:59.220630 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:34:59.220638 | orchestrator |
2026-04-09 03:34:59.220647 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-04-09 03:34:59.220656 | orchestrator | Thursday 09 April 2026 03:33:41 +0000 (0:00:00.547) 0:01:35.014 ********
2026-04-09 03:34:59.220665 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:34:59.220673 | orchestrator |
2026-04-09 03:34:59.220682 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-04-09 03:34:59.220691 | orchestrator | Thursday 09 April 2026 03:33:43 +0000 (0:00:02.078) 0:01:37.093 ********
2026-04-09 03:34:59.220699 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:34:59.220708 | orchestrator |
2026-04-09 03:34:59.220717 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-09 03:34:59.220725 | orchestrator | Thursday 09 April 2026 03:33:45 +0000 (0:00:02.283) 0:01:39.376 ********
2026-04-09 03:34:59.220742 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:34:59.220750 | orchestrator |
2026-04-09 03:34:59.220759 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-04-09 03:34:59.220768 | orchestrator | Thursday 09 April 2026 03:33:47 +0000 (0:00:02.109) 0:01:41.486 ********
2026-04-09 03:34:59.220776 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:34:59.220785 | orchestrator |
2026-04-09 03:34:59.220793 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-04-09 03:34:59.220802 | orchestrator | Thursday 09 April 2026 03:34:17 +0000 (0:00:29.364) 0:02:10.851 ********
2026-04-09 03:34:59.220811 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:34:59.220819 | orchestrator |
2026-04-09 03:34:59.220828 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-09 03:34:59.220836 | orchestrator | Thursday 09 April 2026 03:34:19 +0000 (0:00:00.073) 0:02:12.975 ********
2026-04-09 03:34:59.220845 | orchestrator |
2026-04-09 03:34:59.220854 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-09 03:34:59.220862 | orchestrator | Thursday 09 April 2026 03:34:19 +0000 (0:00:00.071) 0:02:13.049 ********
2026-04-09 03:34:59.220871 | orchestrator |
2026-04-09 03:34:59.220879 | orchestrator | TASK
[glance : Flush handlers] *************************************************
2026-04-09 03:34:59.220888 | orchestrator | Thursday 09 April 2026 03:34:19 +0000 (0:00:00.071) 0:02:13.121 ********
2026-04-09 03:34:59.220896 | orchestrator |
2026-04-09 03:34:59.220905 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-04-09 03:34:59.220916 | orchestrator | Thursday 09 April 2026 03:34:19 +0000 (0:00:00.125) 0:02:13.246 ********
2026-04-09 03:34:59.220925 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:34:59.221003 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:34:59.221020 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:34:59.221033 | orchestrator |
2026-04-09 03:34:59.221047 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:34:59.221062 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 03:34:59.221077 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-09 03:34:59.221090 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-09 03:34:59.221104 | orchestrator |
2026-04-09 03:34:59.221118 | orchestrator |
2026-04-09 03:34:59.221133 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:34:59.221146 | orchestrator | Thursday 09 April 2026 03:34:59 +0000 (0:00:39.489) 0:02:52.735 ********
2026-04-09 03:34:59.221158 | orchestrator | ===============================================================================
2026-04-09 03:34:59.221171 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.49s
2026-04-09 03:34:59.221186 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.36s
2026-04-09 03:34:59.221200 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.70s
2026-04-09 03:34:59.221224 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.43s
2026-04-09 03:34:59.619641 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.50s
2026-04-09 03:34:59.619733 | orchestrator | glance : Copying over config.json files for services -------------------- 4.50s
2026-04-09 03:34:59.619745 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.18s
2026-04-09 03:34:59.619767 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.05s
2026-04-09 03:34:59.619783 | orchestrator | glance : Check glance containers ---------------------------------------- 4.04s
2026-04-09 03:34:59.619792 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.01s
2026-04-09 03:34:59.619837 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.97s
2026-04-09 03:34:59.619845 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.84s
2026-04-09 03:34:59.619853 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.83s
2026-04-09 03:34:59.619860 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.82s
2026-04-09 03:34:59.619867 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.81s
2026-04-09 03:34:59.619875 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.76s
2026-04-09 03:34:59.619882 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.71s
2026-04-09 03:34:59.619889 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.60s
2026-04-09 03:34:59.619896 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.51s
2026-04-09 03:34:59.619903 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.42s
2026-04-09 03:35:02.292456 | orchestrator | 2026-04-09 03:35:02 | INFO  | Task 5365afb4-6515-4910-9d13-3c04278f2ae6 (cinder) was prepared for execution.
2026-04-09 03:35:02.292826 | orchestrator | 2026-04-09 03:35:02 | INFO  | It takes a moment until task 5365afb4-6515-4910-9d13-3c04278f2ae6 (cinder) has been started and output is visible here.
2026-04-09 03:35:37.849317 | orchestrator |
2026-04-09 03:35:37.849432 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 03:35:37.849446 | orchestrator |
2026-04-09 03:35:37.849454 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 03:35:37.849461 | orchestrator | Thursday 09 April 2026 03:35:06 +0000 (0:00:00.309) 0:00:00.309 ********
2026-04-09 03:35:37.849469 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:35:37.849477 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:35:37.849484 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:35:37.849491 | orchestrator |
2026-04-09 03:35:37.849499 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 03:35:37.849506 | orchestrator | Thursday 09 April 2026 03:35:07 +0000 (0:00:00.330) 0:00:00.640 ********
2026-04-09 03:35:37.849513 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-09 03:35:37.849521 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-09 03:35:37.849528 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-09 03:35:37.849536 | orchestrator |
2026-04-09 03:35:37.849543 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-09 03:35:37.849550 | orchestrator |
2026-04-09
03:35:37.849557 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-09 03:35:37.849564 | orchestrator | Thursday 09 April 2026 03:35:07 +0000 (0:00:00.486) 0:00:01.126 ********
2026-04-09 03:35:37.849572 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:35:37.849579 | orchestrator |
2026-04-09 03:35:37.849586 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-04-09 03:35:37.849594 | orchestrator | Thursday 09 April 2026 03:35:08 +0000 (0:00:00.616) 0:00:01.743 ********
2026-04-09 03:35:37.849601 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-04-09 03:35:37.849608 | orchestrator |
2026-04-09 03:35:37.849616 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-04-09 03:35:37.849624 | orchestrator | Thursday 09 April 2026 03:35:12 +0000 (0:00:03.620) 0:00:05.363 ********
2026-04-09 03:35:37.849632 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-04-09 03:35:37.849640 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-04-09 03:35:37.849647 | orchestrator |
2026-04-09 03:35:37.849655 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-04-09 03:35:37.849686 | orchestrator | Thursday 09 April 2026 03:35:18 +0000 (0:00:06.321) 0:00:11.684 ********
2026-04-09 03:35:37.849694 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 03:35:37.849701 | orchestrator |
2026-04-09 03:35:37.849708 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-04-09 03:35:37.849715 | orchestrator | Thursday 09 April 2026 03:35:21 +0000 (0:00:03.115) 0:00:14.800 ********
2026-04-09 03:35:37.849722 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 03:35:37.849733 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-04-09 03:35:37.849749 | orchestrator |
2026-04-09 03:35:37.849766 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-04-09 03:35:37.849778 | orchestrator | Thursday 09 April 2026 03:35:25 +0000 (0:00:04.017) 0:00:18.818 ********
2026-04-09 03:35:37.849789 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 03:35:37.849801 | orchestrator |
2026-04-09 03:35:37.849813 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-04-09 03:35:37.849824 | orchestrator | Thursday 09 April 2026 03:35:28 +0000 (0:00:03.077) 0:00:21.895 ********
2026-04-09 03:35:37.849836 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-04-09 03:35:37.849847 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-04-09 03:35:37.849860 | orchestrator |
2026-04-09 03:35:37.849873 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-04-09 03:35:37.849884 | orchestrator | Thursday 09 April 2026 03:35:35 +0000 (0:00:07.317) 0:00:29.212 ********
2026-04-09 03:35:37.849916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy':
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 03:35:37.849955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 03:35:37.849969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 03:35:37.849995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:37.850009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:37.850087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:37.850097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:37.850115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:44.134792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:44.134913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:44.134926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:44.134945 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:44.134952 | orchestrator | 2026-04-09 03:35:44.134959 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 03:35:44.134966 | orchestrator | Thursday 09 April 2026 03:35:37 +0000 (0:00:02.045) 0:00:31.258 ******** 2026-04-09 03:35:44.134972 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:35:44.134980 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:35:44.134985 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:35:44.134990 | orchestrator | 2026-04-09 03:35:44.134995 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 03:35:44.135001 | orchestrator | Thursday 09 April 2026 03:35:38 +0000 (0:00:00.560) 0:00:31.818 ******** 2026-04-09 03:35:44.135007 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:35:44.135013 | orchestrator | 2026-04-09 03:35:44.135018 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-09 03:35:44.135024 | orchestrator | Thursday 09 April 2026 03:35:39 +0000 (0:00:00.587) 0:00:32.405 ******** 2026-04-09 03:35:44.135030 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-09 03:35:44.135037 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-09 03:35:44.135043 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-09 03:35:44.135049 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-09 03:35:44.135064 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-09 03:35:44.135070 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-09 03:35:44.135076 | orchestrator | 2026-04-09 03:35:44.135082 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-09 03:35:44.135088 | orchestrator | Thursday 09 April 2026 03:35:40 +0000 (0:00:01.729) 0:00:34.135 ******** 2026-04-09 03:35:44.135111 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 03:35:44.135120 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 03:35:44.135132 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 03:35:44.135139 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 03:35:44.135150 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 03:35:54.995786 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 03:35:54.995893 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 03:35:54.995938 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 03:35:54.995958 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 03:35:54.995975 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 03:35:54.996043 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 
03:35:54.996064 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 03:35:54.996082 | orchestrator | 2026-04-09 03:35:54.996100 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-09 03:35:54.996118 | orchestrator | Thursday 09 April 2026 03:35:44 +0000 (0:00:03.704) 0:00:37.839 ******** 2026-04-09 03:35:54.996133 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 03:35:54.996151 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 03:35:54.996166 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 03:35:54.996182 | orchestrator | 2026-04-09 03:35:54.996198 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-09 03:35:54.996214 | orchestrator | Thursday 09 April 2026 03:35:46 +0000 (0:00:01.605) 0:00:39.444 ******** 2026-04-09 03:35:54.996231 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-09 03:35:54.996275 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-09 03:35:54.996290 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-09 03:35:54.996305 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 03:35:54.996332 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 03:35:54.996347 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 03:35:54.996362 | orchestrator | 2026-04-09 03:35:54.996379 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-09 03:35:54.996396 | orchestrator | Thursday 09 April 2026 03:35:48 +0000 (0:00:02.707) 0:00:42.152 ******** 2026-04-09 03:35:54.996412 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-09 03:35:54.996429 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-09 03:35:54.996460 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-09 03:35:54.996476 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-09 03:35:54.996490 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-09 03:35:54.996505 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-09 03:35:54.996521 | orchestrator | 2026-04-09 03:35:54.996537 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-09 03:35:54.996552 | orchestrator | Thursday 09 April 2026 03:35:49 +0000 (0:00:01.021) 0:00:43.174 ******** 2026-04-09 03:35:54.996568 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:35:54.996583 | orchestrator | 2026-04-09 03:35:54.996598 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-09 03:35:54.996613 | orchestrator | Thursday 09 April 2026 03:35:49 +0000 (0:00:00.132) 0:00:43.306 ******** 2026-04-09 03:35:54.996627 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:35:54.996642 | orchestrator | 
skipping: [testbed-node-1] 2026-04-09 03:35:54.996657 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:35:54.996672 | orchestrator | 2026-04-09 03:35:54.996687 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 03:35:54.996702 | orchestrator | Thursday 09 April 2026 03:35:50 +0000 (0:00:00.572) 0:00:43.879 ******** 2026-04-09 03:35:54.996719 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:35:54.996734 | orchestrator | 2026-04-09 03:35:54.996748 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-09 03:35:54.996764 | orchestrator | Thursday 09 April 2026 03:35:51 +0000 (0:00:00.600) 0:00:44.480 ******** 2026-04-09 03:35:54.996792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 03:35:55.971299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 03:35:55.971400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 03:35:55.971430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:55.971440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:55.971448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:55.971471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:55.971481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:55.971495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 
03:35:55.971509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:55.971517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:55.971525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 03:35:55.971533 | orchestrator | 2026-04-09 03:35:55.971542 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-09 03:35:55.971551 | orchestrator | Thursday 09 April 2026 03:35:55 +0000 (0:00:03.936) 0:00:48.417 ******** 2026-04-09 03:35:55.971565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 03:35:56.079770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:35:56.079892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 03:35:56.079912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 03:35:56.079920 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:35:56.079928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 03:35:56.079936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:35:56.079956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-09 03:35:56.079973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 03:35:56.079986 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:35:56.080000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 03:35:56.080016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:35:56.080027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 03:35:56.080038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 03:35:56.080057 | orchestrator | skipping: 
[testbed-node-2]
2026-04-09 03:35:56.080067 | orchestrator |
2026-04-09 03:35:56.080077 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-04-09 03:35:56.080095 | orchestrator | Thursday 09 April 2026 03:35:56 +0000 (0:00:00.964) 0:00:49.382 ********
2026-04-09 03:35:56.680111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:35:56.680191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:35:56.680198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:35:56.680204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:35:56.680209 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:35:56.680214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:35:56.680250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:35:56.680302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:35:56.680306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:35:56.680310 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:35:56.680314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:35:56.680318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:35:56.680330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:01.445842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:01.445930 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:36:01.445938 | orchestrator |
2026-04-09 03:36:01.445955 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-04-09 03:36:01.445960 | orchestrator | Thursday 09 April 2026 03:35:56 +0000 (0:00:00.914) 0:00:50.296 ********
2026-04-09 03:36:01.446008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:01.446714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:01.446730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:01.446781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:36:01.446792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:36:01.446807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:36:01.446815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:01.446822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:01.446829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:01.446848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:15.250500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:15.250617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:15.250635 | orchestrator |
2026-04-09 03:36:15.250649 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-04-09 03:36:15.250662 | orchestrator | Thursday 09 April 2026 03:36:01 +0000 (0:00:04.558) 0:00:54.855 ********
2026-04-09 03:36:15.250672 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-09 03:36:15.250684 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-09 03:36:15.250695 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-09 03:36:15.250705 | orchestrator |
2026-04-09 03:36:15.250716 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-04-09 03:36:15.250726 | orchestrator | Thursday 09 April 2026 03:36:03 +0000 (0:00:01.846) 0:00:56.702 ********
2026-04-09 03:36:15.250738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:15.250776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:15.250814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:15.250825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:36:15.250835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:36:15.250846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:36:15.250864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:15.250875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:15.250894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:17.875336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:17.875573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:17.875601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:17.876552 | orchestrator |
2026-04-09 03:36:17.876581 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-04-09 03:36:17.876593 | orchestrator | Thursday 09 April 2026 03:36:15 +0000 (0:00:11.949) 0:01:08.651 ********
2026-04-09 03:36:17.876604 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:36:17.876616 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:36:17.876627 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:36:17.876638 | orchestrator |
2026-04-09 03:36:17.876649 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-04-09 03:36:17.876660 | orchestrator | Thursday 09 April 2026 03:36:16 +0000 (0:00:01.597) 0:01:10.249 ********
2026-04-09 03:36:17.876673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:17.876686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:36:17.876731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:17.876745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:17.876770 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:36:17.876782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:17.876793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:36:17.876805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:17.876831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:21.576048 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:36:21.576151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:21.576199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 03:36:21.576229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 03:36:21.576243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 03:36:21.576255 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:36:21.576275 | orchestrator |
2026-04-09 03:36:21.576283 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-04-09 03:36:21.576290 | orchestrator | Thursday 09 April 2026 03:36:17 +0000 (0:00:01.052) 0:01:11.301 ********
2026-04-09 03:36:21.576297 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:36:21.576303 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:36:21.576309 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:36:21.576315 | orchestrator |
2026-04-09 03:36:21.576321 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-04-09 03:36:21.576328 | orchestrator | Thursday 09 April 2026 03:36:18 +0000 (0:00:00.677) 0:01:11.979 ********
2026-04-09 03:36:21.576363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-09 03:36:21.576428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 03:36:21.576437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 03:36:21.576444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:36:21.576451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:36:21.576462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:36:21.576476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:02.729517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:02.729620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:02.729634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:02.729645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:02.729670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-04-09 03:38:02.729722 | orchestrator | 2026-04-09 03:38:02.729744 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 03:38:02.729759 | orchestrator | Thursday 09 April 2026 03:36:21 +0000 (0:00:03.014) 0:01:14.993 ******** 2026-04-09 03:38:02.729774 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:38:02.729856 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:38:02.729876 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:38:02.729890 | orchestrator | 2026-04-09 03:38:02.729906 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-09 03:38:02.729920 | orchestrator | Thursday 09 April 2026 03:36:22 +0000 (0:00:00.333) 0:01:15.326 ******** 2026-04-09 03:38:02.729935 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:38:02.729950 | orchestrator | 2026-04-09 03:38:02.730005 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-09 03:38:02.730093 | orchestrator | Thursday 09 April 2026 03:36:24 +0000 (0:00:02.109) 0:01:17.435 ******** 2026-04-09 03:38:02.730111 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:38:02.730127 | orchestrator | 2026-04-09 03:38:02.730137 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-09 03:38:02.730148 | orchestrator | Thursday 09 April 2026 03:36:26 +0000 (0:00:02.229) 0:01:19.665 ******** 2026-04-09 03:38:02.730158 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:38:02.730168 | orchestrator | 2026-04-09 03:38:02.730179 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 03:38:02.730189 | orchestrator | Thursday 09 April 2026 03:36:46 +0000 (0:00:19.923) 0:01:39.588 ******** 2026-04-09 03:38:02.730199 | orchestrator | 2026-04-09 03:38:02.730209 | orchestrator | TASK [cinder : Flush handlers] 
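The cinder-api service definitions logged above each carry an `haproxy` mapping (internal and external frontends, `mode: http`, `port`/`listen_port` 8776, `tls_backend: no`) that kolla-ansible uses to template the load-balancer configuration. As a hedged illustration only, not the actual kolla template, such a mapping could be rendered into an HAProxy-style stanza roughly like this (the VIP `192.168.16.9` is a hypothetical value; the backend IPs are the node addresses visible in the healthcheck entries above):

```python
def render_listen(name, svc, vip, backends):
    """Very simplified, hypothetical rendering of one 'haproxy' entry.
    The real kolla-ansible templates are far more involved.
    backends: list of (server_name, ip) tuples."""
    lines = [
        f"listen {name}",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['listen_port']}",
    ]
    for server, ip in backends:
        # 'check' enables HAProxy's own backend health checking
        lines.append(f"    server {server} {ip}:{svc['port']} check")
    return "\n".join(lines)

# Values copied from the cinder_api entry in the log above
cinder_api = {"enabled": "yes", "mode": "http", "external": False,
              "port": "8776", "listen_port": "8776", "tls_backend": "no"}
stanza = render_listen("cinder_api", cinder_api, "192.168.16.9",  # VIP assumed
                       [("testbed-node-0", "192.168.16.10"),
                        ("testbed-node-1", "192.168.16.11"),
                        ("testbed-node-2", "192.168.16.12")])
```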
************************************************* 2026-04-09 03:38:02.730219 | orchestrator | Thursday 09 April 2026 03:36:46 +0000 (0:00:00.070) 0:01:39.659 ******** 2026-04-09 03:38:02.730229 | orchestrator | 2026-04-09 03:38:02.730239 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 03:38:02.730250 | orchestrator | Thursday 09 April 2026 03:36:46 +0000 (0:00:00.084) 0:01:39.744 ******** 2026-04-09 03:38:02.730260 | orchestrator | 2026-04-09 03:38:02.730270 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-09 03:38:02.730281 | orchestrator | Thursday 09 April 2026 03:36:46 +0000 (0:00:00.077) 0:01:39.822 ******** 2026-04-09 03:38:02.730291 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:38:02.730302 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:38:02.730312 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:38:02.730322 | orchestrator | 2026-04-09 03:38:02.730333 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-09 03:38:02.730343 | orchestrator | Thursday 09 April 2026 03:37:17 +0000 (0:00:31.139) 0:02:10.961 ******** 2026-04-09 03:38:02.730353 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:38:02.730364 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:38:02.730374 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:38:02.730383 | orchestrator | 2026-04-09 03:38:02.730394 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-09 03:38:02.730404 | orchestrator | Thursday 09 April 2026 03:37:28 +0000 (0:00:10.368) 0:02:21.329 ******** 2026-04-09 03:38:02.730415 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:38:02.730425 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:38:02.730435 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:38:02.730445 | orchestrator | 2026-04-09 
03:38:02.730456 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-09 03:38:02.730466 | orchestrator | Thursday 09 April 2026 03:37:56 +0000 (0:00:28.141) 0:02:49.471 ******** 2026-04-09 03:38:02.730476 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:38:02.730485 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:38:02.730520 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:38:02.730541 | orchestrator | 2026-04-09 03:38:02.730556 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-09 03:38:02.730571 | orchestrator | Thursday 09 April 2026 03:38:02 +0000 (0:00:06.258) 0:02:55.730 ******** 2026-04-09 03:38:02.730584 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:38:02.730598 | orchestrator | 2026-04-09 03:38:02.730612 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:38:02.730627 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 03:38:02.730644 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 03:38:02.730657 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 03:38:02.730671 | orchestrator | 2026-04-09 03:38:02.730686 | orchestrator | 2026-04-09 03:38:02.730699 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:38:02.730708 | orchestrator | Thursday 09 April 2026 03:38:02 +0000 (0:00:00.283) 0:02:56.013 ******** 2026-04-09 03:38:02.730716 | orchestrator | =============================================================================== 2026-04-09 03:38:02.730725 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 31.14s 2026-04-09 03:38:02.730734 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 28.14s 2026-04-09 03:38:02.730742 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.92s 2026-04-09 03:38:02.730751 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.95s 2026-04-09 03:38:02.730768 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.37s 2026-04-09 03:38:02.730777 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.32s 2026-04-09 03:38:02.730786 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.32s 2026-04-09 03:38:02.730823 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.26s 2026-04-09 03:38:02.730836 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.56s 2026-04-09 03:38:02.730849 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.02s 2026-04-09 03:38:02.730858 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.94s 2026-04-09 03:38:02.730866 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.70s 2026-04-09 03:38:02.730875 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.62s 2026-04-09 03:38:02.730884 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.12s 2026-04-09 03:38:02.730904 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.08s 2026-04-09 03:38:03.186898 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.01s 2026-04-09 03:38:03.186980 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.71s 2026-04-09 03:38:03.186987 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.23s 2026-04-09 03:38:03.186992 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.11s 2026-04-09 03:38:03.186997 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.05s 2026-04-09 03:38:05.837317 | orchestrator | 2026-04-09 03:38:05 | INFO  | Task 5c74492f-ab26-4e1c-abec-13c1dbb6da17 (barbican) was prepared for execution. 2026-04-09 03:38:05.837423 | orchestrator | 2026-04-09 03:38:05 | INFO  | It takes a moment until task 5c74492f-ab26-4e1c-abec-13c1dbb6da17 (barbican) has been started and output is visible here. 2026-04-09 03:38:50.253298 | orchestrator | 2026-04-09 03:38:50.253405 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:38:50.253446 | orchestrator | 2026-04-09 03:38:50.253456 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:38:50.253464 | orchestrator | Thursday 09 April 2026 03:38:10 +0000 (0:00:00.281) 0:00:00.281 ******** 2026-04-09 03:38:50.253473 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:38:50.253483 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:38:50.253492 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:38:50.253502 | orchestrator | 2026-04-09 03:38:50.253510 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:38:50.253519 | orchestrator | Thursday 09 April 2026 03:38:10 +0000 (0:00:00.327) 0:00:00.609 ******** 2026-04-09 03:38:50.253528 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-09 03:38:50.253537 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-09 03:38:50.253547 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-09 03:38:50.253554 | orchestrator | 2026-04-09 03:38:50.253559 | orchestrator | PLAY [Apply role barbican] 
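The TASKS RECAP above lists per-task wall-clock durations in a fixed format (`role : Task name ---- 31.14s`). When triaging slow deploys, it can help to pull those lines out of the raw log programmatically; a minimal sketch, based only on the line format visible in this transcript:

```python
import re

# Matches kolla-ansible "TASKS RECAP" lines such as
#   "cinder : Restart cinder-api container ---------------- 31.14s"
# The dash padding is variable-width, hence the non-greedy task group.
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return [(task, seconds)] for every recap line found."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out

# Sample lines copied from the recap above
sample = [
    "cinder : Restart cinder-api container ---------------------------------- 31.14s",
    "cinder : Restart cinder-volume container ------------------------------- 28.14s",
    "cinder : Running Cinder bootstrap container ---------------------------- 19.92s",
]
parsed = parse_recap(sample)
```

Sorting `parsed` by the second tuple element gives the same slowest-first ordering the recap itself uses.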
***************************************************** 2026-04-09 03:38:50.253564 | orchestrator | 2026-04-09 03:38:50.253569 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 03:38:50.253574 | orchestrator | Thursday 09 April 2026 03:38:11 +0000 (0:00:00.532) 0:00:01.141 ******** 2026-04-09 03:38:50.253580 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:38:50.253587 | orchestrator | 2026-04-09 03:38:50.253592 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-09 03:38:50.253597 | orchestrator | Thursday 09 April 2026 03:38:12 +0000 (0:00:00.637) 0:00:01.778 ******** 2026-04-09 03:38:50.253603 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-09 03:38:50.253608 | orchestrator | 2026-04-09 03:38:50.253613 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-09 03:38:50.253618 | orchestrator | Thursday 09 April 2026 03:38:15 +0000 (0:00:03.592) 0:00:05.370 ******** 2026-04-09 03:38:50.253623 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-09 03:38:50.253629 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-09 03:38:50.253634 | orchestrator | 2026-04-09 03:38:50.253639 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-09 03:38:50.253644 | orchestrator | Thursday 09 April 2026 03:38:22 +0000 (0:00:06.438) 0:00:11.809 ******** 2026-04-09 03:38:50.253649 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 03:38:50.253654 | orchestrator | 2026-04-09 03:38:50.253659 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-09 
03:38:50.253664 | orchestrator | Thursday 09 April 2026 03:38:25 +0000 (0:00:03.232) 0:00:15.041 ******** 2026-04-09 03:38:50.253669 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 03:38:50.253674 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-09 03:38:50.253679 | orchestrator | 2026-04-09 03:38:50.253684 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-09 03:38:50.253689 | orchestrator | Thursday 09 April 2026 03:38:29 +0000 (0:00:04.039) 0:00:19.081 ******** 2026-04-09 03:38:50.253695 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 03:38:50.253700 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-09 03:38:50.253717 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-09 03:38:50.253722 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-09 03:38:50.253727 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-09 03:38:50.253732 | orchestrator | 2026-04-09 03:38:50.253737 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-09 03:38:50.253742 | orchestrator | Thursday 09 April 2026 03:38:44 +0000 (0:00:15.335) 0:00:34.416 ******** 2026-04-09 03:38:50.253753 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-09 03:38:50.253759 | orchestrator | 2026-04-09 03:38:50.253764 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-09 03:38:50.253769 | orchestrator | Thursday 09 April 2026 03:38:48 +0000 (0:00:03.869) 0:00:38.285 ******** 2026-04-09 03:38:50.253777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:38:50.253800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:38:50.253806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:38:50.253812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:50.253824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:50.253833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:50.253844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:56.179728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:56.179836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:56.179852 | orchestrator | 2026-04-09 03:38:56.179865 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-09 03:38:56.179877 | orchestrator | Thursday 09 April 2026 03:38:50 +0000 (0:00:01.666) 0:00:39.952 ******** 2026-04-09 03:38:56.179887 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-09 03:38:56.179897 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-09 03:38:56.179907 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-09 03:38:56.179916 | orchestrator | 2026-04-09 03:38:56.179926 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-09 03:38:56.179936 | orchestrator | Thursday 09 April 2026 03:38:51 +0000 (0:00:01.216) 0:00:41.168 ******** 2026-04-09 03:38:56.179946 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:38:56.179956 | orchestrator | 2026-04-09 03:38:56.180043 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-09 03:38:56.180055 | orchestrator | Thursday 09 April 2026 03:38:51 +0000 (0:00:00.367) 0:00:41.536 ******** 2026-04-09 03:38:56.180065 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 03:38:56.180075 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:38:56.180084 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:38:56.180094 | orchestrator | 2026-04-09 03:38:56.180103 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 03:38:56.180113 | orchestrator | Thursday 09 April 2026 03:38:52 +0000 (0:00:00.324) 0:00:41.861 ******** 2026-04-09 03:38:56.180138 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:38:56.180148 | orchestrator | 2026-04-09 03:38:56.180158 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-09 03:38:56.180167 | orchestrator | Thursday 09 April 2026 03:38:52 +0000 (0:00:00.586) 0:00:42.447 ******** 2026-04-09 03:38:56.180179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:38:56.180210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:38:56.180222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:38:56.180233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:56.180264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:56.180288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:56.180310 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:56.180340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:57.669800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:38:57.669883 | orchestrator | 2026-04-09 03:38:57.669891 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-04-09 03:38:57.669898 | orchestrator | Thursday 09 April 2026 03:38:56 +0000 (0:00:03.427) 0:00:45.875 ******** 2026-04-09 03:38:57.669924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 03:38:57.669941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:38:57.669947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:38:57.669951 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:38:57.669957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 03:38:57.669999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:38:57.670007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:38:57.670062 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:38:57.670071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 03:38:57.670075 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:38:57.670080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:38:57.670084 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:38:57.670088 | orchestrator | 2026-04-09 03:38:57.670093 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-09 03:38:57.670097 | orchestrator | Thursday 09 April 2026 03:38:56 +0000 (0:00:00.653) 0:00:46.529 ******** 2026-04-09 03:38:57.670107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 03:39:01.145012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:39:01.145123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 
03:39:01.145143 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:39:01.145180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 03:39:01.145197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:39:01.145209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:39:01.145217 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:39:01.145247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 03:39:01.145290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:39:01.145312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:39:01.145325 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:39:01.145339 | orchestrator | 2026-04-09 03:39:01.145353 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-09 03:39:01.145368 | orchestrator | Thursday 09 April 2026 03:38:57 +0000 (0:00:00.849) 0:00:47.379 ******** 2026-04-09 03:39:01.145380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:39:01.145394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:39:01.145430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:39:11.270609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:11.270744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:11.270764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:11.270784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:11.270806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:11.270859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:11.270880 | orchestrator | 2026-04-09 03:39:11.270902 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-09 03:39:11.270925 | orchestrator | Thursday 09 April 2026 03:39:01 +0000 (0:00:03.468) 0:00:50.847 ******** 2026-04-09 03:39:11.270945 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:39:11.270970 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:39:11.270990 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:39:11.271044 | orchestrator | 2026-04-09 03:39:11.271078 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-09 03:39:11.271090 | orchestrator | Thursday 09 April 2026 03:39:02 +0000 (0:00:01.602) 0:00:52.450 ******** 2026-04-09 03:39:11.271102 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:39:11.271113 | orchestrator | 2026-04-09 03:39:11.271124 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-09 03:39:11.271138 | orchestrator | Thursday 09 April 2026 03:39:03 +0000 (0:00:01.000) 0:00:53.451 ******** 2026-04-09 03:39:11.271151 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:39:11.271164 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:39:11.271177 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:39:11.271188 | orchestrator | 2026-04-09 03:39:11.271201 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-09 03:39:11.271214 | orchestrator | Thursday 09 April 2026 03:39:04 +0000 (0:00:00.603) 0:00:54.055 ******** 2026-04-09 03:39:11.271236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:39:11.271251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:39:11.271277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:39:11.271510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:12.190924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:12.191054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:12.191070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:12.191211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:12.191233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:12.191243 | orchestrator | 2026-04-09 03:39:12.191256 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-09 03:39:12.191267 | orchestrator | Thursday 09 April 2026 03:39:11 +0000 (0:00:06.916) 0:01:00.971 ******** 2026-04-09 03:39:12.191296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 03:39:12.191313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:39:12.191324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:39:12.191334 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:39:12.191361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 03:39:12.191395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:39:12.191406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:39:12.191415 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:39:12.191434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 03:39:14.593846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:39:14.593951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:39:14.594003 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:39:14.594166 | orchestrator | 2026-04-09 03:39:14.594181 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-09 03:39:14.594191 | orchestrator | Thursday 09 April 2026 03:39:12 +0000 (0:00:00.924) 0:01:01.896 ******** 2026-04-09 03:39:14.594200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:39:14.594210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:39:14.594236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 03:39:14.594256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:14.594291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:14.594310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:14.594324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:14.594339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:14.594352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:39:14.594367 | orchestrator | 2026-04-09 03:39:14.594383 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 03:39:14.594407 | orchestrator | Thursday 09 April 2026 03:39:14 +0000 (0:00:02.392) 0:01:04.288 ******** 2026-04-09 03:39:59.455707 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:39:59.455815 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
03:39:59.455829 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:39:59.455839 | orchestrator | 2026-04-09 03:39:59.455865 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-09 03:39:59.455901 | orchestrator | Thursday 09 April 2026 03:39:14 +0000 (0:00:00.357) 0:01:04.646 ******** 2026-04-09 03:39:59.455911 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:39:59.455920 | orchestrator | 2026-04-09 03:39:59.455928 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-09 03:39:59.455937 | orchestrator | Thursday 09 April 2026 03:39:17 +0000 (0:00:02.097) 0:01:06.743 ******** 2026-04-09 03:39:59.455946 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:39:59.455954 | orchestrator | 2026-04-09 03:39:59.455963 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-09 03:39:59.455972 | orchestrator | Thursday 09 April 2026 03:39:19 +0000 (0:00:02.192) 0:01:08.936 ******** 2026-04-09 03:39:59.455981 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:39:59.455989 | orchestrator | 2026-04-09 03:39:59.455998 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-09 03:39:59.456006 | orchestrator | Thursday 09 April 2026 03:39:31 +0000 (0:00:12.702) 0:01:21.638 ******** 2026-04-09 03:39:59.456015 | orchestrator | 2026-04-09 03:39:59.456024 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-09 03:39:59.456032 | orchestrator | Thursday 09 April 2026 03:39:31 +0000 (0:00:00.075) 0:01:21.713 ******** 2026-04-09 03:39:59.456041 | orchestrator | 2026-04-09 03:39:59.456049 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-09 03:39:59.456058 | orchestrator | Thursday 09 April 2026 03:39:32 +0000 (0:00:00.092) 0:01:21.806 ******** 2026-04-09 
03:39:59.456066 | orchestrator | 2026-04-09 03:39:59.456075 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-09 03:39:59.456084 | orchestrator | Thursday 09 April 2026 03:39:32 +0000 (0:00:00.091) 0:01:21.898 ******** 2026-04-09 03:39:59.456092 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:39:59.456101 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:39:59.456110 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:39:59.456149 | orchestrator | 2026-04-09 03:39:59.456157 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-09 03:39:59.456166 | orchestrator | Thursday 09 April 2026 03:39:43 +0000 (0:00:11.430) 0:01:33.329 ******** 2026-04-09 03:39:59.456174 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:39:59.456183 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:39:59.456192 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:39:59.456201 | orchestrator | 2026-04-09 03:39:59.456209 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-09 03:39:59.456325 | orchestrator | Thursday 09 April 2026 03:39:48 +0000 (0:00:05.086) 0:01:38.415 ******** 2026-04-09 03:39:59.456336 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:39:59.456347 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:39:59.456357 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:39:59.456367 | orchestrator | 2026-04-09 03:39:59.456377 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:39:59.456388 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 03:39:59.456399 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 03:39:59.456410 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 03:39:59.456419 | orchestrator | 2026-04-09 03:39:59.456430 | orchestrator | 2026-04-09 03:39:59.456440 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:39:59.456450 | orchestrator | Thursday 09 April 2026 03:39:59 +0000 (0:00:10.340) 0:01:48.756 ******** 2026-04-09 03:39:59.456460 | orchestrator | =============================================================================== 2026-04-09 03:39:59.456470 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.34s 2026-04-09 03:39:59.456490 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.70s 2026-04-09 03:39:59.456500 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.43s 2026-04-09 03:39:59.456510 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.34s 2026-04-09 03:39:59.456520 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.92s 2026-04-09 03:39:59.456530 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.44s 2026-04-09 03:39:59.456540 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.09s 2026-04-09 03:39:59.456549 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.04s 2026-04-09 03:39:59.456559 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.87s 2026-04-09 03:39:59.456569 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.59s 2026-04-09 03:39:59.456579 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.47s 2026-04-09 03:39:59.456589 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.43s 
2026-04-09 03:39:59.456602 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.23s 2026-04-09 03:39:59.456617 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.39s 2026-04-09 03:39:59.456632 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.19s 2026-04-09 03:39:59.456667 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.10s 2026-04-09 03:39:59.456683 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.67s 2026-04-09 03:39:59.456706 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.60s 2026-04-09 03:39:59.456721 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.22s 2026-04-09 03:39:59.456737 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.00s 2026-04-09 03:40:02.072744 | orchestrator | 2026-04-09 03:40:02 | INFO  | Task b1d82d94-0575-4102-89ed-5325fe0fb59a (designate) was prepared for execution. 2026-04-09 03:40:02.072859 | orchestrator | 2026-04-09 03:40:02 | INFO  | It takes a moment until task b1d82d94-0575-4102-89ed-5325fe0fb59a (designate) has been started and output is visible here. 
2026-04-09 03:40:34.810789 | orchestrator | 2026-04-09 03:40:34.810867 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:40:34.810873 | orchestrator | 2026-04-09 03:40:34.810878 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:40:34.810882 | orchestrator | Thursday 09 April 2026 03:40:06 +0000 (0:00:00.294) 0:00:00.294 ******** 2026-04-09 03:40:34.810886 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:40:34.810891 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:40:34.810896 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:40:34.810899 | orchestrator | 2026-04-09 03:40:34.810903 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:40:34.810907 | orchestrator | Thursday 09 April 2026 03:40:06 +0000 (0:00:00.311) 0:00:00.606 ******** 2026-04-09 03:40:34.810912 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-09 03:40:34.810916 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-09 03:40:34.810920 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-09 03:40:34.810924 | orchestrator | 2026-04-09 03:40:34.810928 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-09 03:40:34.810931 | orchestrator | 2026-04-09 03:40:34.810935 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 03:40:34.810939 | orchestrator | Thursday 09 April 2026 03:40:07 +0000 (0:00:00.474) 0:00:01.080 ******** 2026-04-09 03:40:34.810943 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:40:34.810966 | orchestrator | 2026-04-09 03:40:34.810970 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-04-09 03:40:34.810974 | orchestrator | Thursday 09 April 2026 03:40:08 +0000 (0:00:00.592) 0:00:01.673 ******** 2026-04-09 03:40:34.810978 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-04-09 03:40:34.810982 | orchestrator | 2026-04-09 03:40:34.810985 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-04-09 03:40:34.810989 | orchestrator | Thursday 09 April 2026 03:40:11 +0000 (0:00:03.986) 0:00:05.659 ******** 2026-04-09 03:40:34.810993 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-04-09 03:40:34.810997 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-04-09 03:40:34.811001 | orchestrator | 2026-04-09 03:40:34.811004 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-04-09 03:40:34.811008 | orchestrator | Thursday 09 April 2026 03:40:18 +0000 (0:00:06.551) 0:00:12.210 ******** 2026-04-09 03:40:34.811012 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 03:40:34.811016 | orchestrator | 2026-04-09 03:40:34.811020 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-04-09 03:40:34.811024 | orchestrator | Thursday 09 April 2026 03:40:21 +0000 (0:00:03.313) 0:00:15.523 ******** 2026-04-09 03:40:34.811027 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 03:40:34.811031 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-04-09 03:40:34.811035 | orchestrator | 2026-04-09 03:40:34.811039 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-04-09 03:40:34.811042 | orchestrator | Thursday 09 April 2026 03:40:25 +0000 (0:00:04.004) 0:00:19.528 ******** 2026-04-09 03:40:34.811046 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-04-09 03:40:34.811050 | orchestrator | 2026-04-09 03:40:34.811054 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-09 03:40:34.811058 | orchestrator | Thursday 09 April 2026 03:40:28 +0000 (0:00:03.125) 0:00:22.654 ******** 2026-04-09 03:40:34.811062 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-09 03:40:34.811065 | orchestrator | 2026-04-09 03:40:34.811069 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-09 03:40:34.811073 | orchestrator | Thursday 09 April 2026 03:40:32 +0000 (0:00:03.787) 0:00:26.442 ******** 2026-04-09 03:40:34.811089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 03:40:34.811107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 03:40:34.811116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 03:40:34.811122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:40:34.811127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:40:34.811131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:40:34.811138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:34.811147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 
03:40:41.041777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:41.041793 | orchestrator | 2026-04-09 03:40:41.041801 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-09 03:40:41.041811 | orchestrator | Thursday 09 April 2026 03:40:35 +0000 (0:00:02.763) 0:00:29.205 ******** 2026-04-09 03:40:41.041818 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:40:41.041828 | orchestrator | 2026-04-09 03:40:41.041835 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-09 03:40:41.041842 | orchestrator | Thursday 09 April 2026 03:40:35 +0000 (0:00:00.131) 0:00:29.336 ******** 2026-04-09 03:40:41.041849 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
03:40:41.041857 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:40:41.041864 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:40:41.041871 | orchestrator | 2026-04-09 03:40:41.041879 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 03:40:41.041887 | orchestrator | Thursday 09 April 2026 03:40:36 +0000 (0:00:00.555) 0:00:29.892 ******** 2026-04-09 03:40:41.041895 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:40:41.041901 | orchestrator | 2026-04-09 03:40:41.041905 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-09 03:40:41.041917 | orchestrator | Thursday 09 April 2026 03:40:36 +0000 (0:00:00.575) 0:00:30.468 ******** 2026-04-09 03:40:41.041928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 03:40:41.041942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 03:40:42.861057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 03:40:42.861166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:42.861403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:43.864975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:43.865078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:43.865091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:40:43.865127 | orchestrator | 2026-04-09 03:40:43.865140 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-09 03:40:43.865152 | orchestrator | Thursday 09 April 2026 03:40:42 +0000 (0:00:06.049) 0:00:36.518 ******** 2026-04-09 03:40:43.865178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 03:40:43.865192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 03:40:43.865244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 03:40:43.865258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 03:40:43.865269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 03:40:43.865288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})
2026-04-09 03:40:43.865298 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:40:43.865315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:40:43.865325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:40:43.865336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:40:43.865352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 03:40:44.855178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 03:40:44.855330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:40:44.855342 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:40:44.855364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:40:44.855373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:40:44.855381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:40:44.855388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 03:40:44.855409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 03:40:44.855429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:40:44.855435 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:40:44.855442 | orchestrator |
2026-04-09 03:40:44.855449 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-04-09 03:40:44.855457 | orchestrator | Thursday 09 April 2026 03:40:43 +0000 (0:00:01.132) 0:00:37.650 ********
2026-04-09 03:40:44.855467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:40:44.855475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:40:44.855481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:40:44.855492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 03:40:45.245342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 03:40:45.245427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:40:45.245439 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:40:45.245464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:40:45.245473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:40:45.245480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:40:45.245485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 03:40:45.245517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 03:40:45.245521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:40:45.245525 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:40:45.245532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:40:45.245536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:40:45.245540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:40:45.245544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 03:40:45.245555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 03:40:49.750570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:40:49.750652 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:40:49.750660 | orchestrator |
2026-04-09 03:40:49.750666 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-04-09 03:40:49.750675 | orchestrator | Thursday 09 April 2026 03:40:45 +0000 (0:00:01.248) 0:00:38.899 ********
2026-04-09 03:40:49.750697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:40:49.750705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:40:49.750713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:40:49.750754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:40:49.750764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:40:49.750772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:40:49.750777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:40:49.750782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:40:49.750787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:40:49.750797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 03:40:49.750807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 03:41:01.594836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 03:41:01.594961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 03:41:01.594978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 03:41:01.594989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 03:41:01.595036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:41:01.595048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:41:01.595075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:41:01.595087 | orchestrator |
2026-04-09 03:41:01.595099 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-04-09 03:41:01.595110 | orchestrator | Thursday 09 April 2026 03:40:51 +0000 (0:00:06.311) 0:00:45.210 ********
2026-04-09 03:41:01.595127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:41:01.595139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:41:01.595157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-09 03:41:01.595168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:41:01.595188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:41:10.405016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 03:41:10.405125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:41:10.405134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:41:10.405156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 03:41:10.405162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:10.405168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:10.405183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:10.405191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:10.405196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:10.405201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:10.405210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:10.405214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:10.405250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:10.405256 | orchestrator | 2026-04-09 03:41:10.405309 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-09 03:41:10.405316 | orchestrator | Thursday 09 April 2026 03:41:06 +0000 (0:00:14.781) 0:00:59.992 ******** 2026-04-09 03:41:10.405325 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-09 03:41:15.008229 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-09 03:41:15.008418 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-09 03:41:15.008432 | orchestrator | 2026-04-09 03:41:15.008441 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-09 03:41:15.008449 | orchestrator | Thursday 09 April 2026 03:41:10 +0000 (0:00:04.065) 0:01:04.058 ******** 2026-04-09 03:41:15.008456 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-09 03:41:15.008464 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-09 03:41:15.008471 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-09 03:41:15.008479 | orchestrator | 2026-04-09 03:41:15.008486 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-09 03:41:15.008511 | orchestrator | Thursday 09 April 2026 03:41:13 +0000 (0:00:02.712) 0:01:06.770 ******** 2026-04-09 03:41:15.008522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 03:41:15.008557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 03:41:15.008566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-04-09 03:41:15.008590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:41:15.008601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 03:41:15.008613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-04-09 03:41:15.008631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 03:41:15.008639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:41:15.008647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-04-09 03:41:15.008653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:41:15.008665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 03:41:17.961244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-04-09 03:41:17.961388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 03:41:17.961403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 03:41:17.961411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 03:41:17.961418 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:17.961425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:17.961449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:17.961464 | orchestrator | 2026-04-09 03:41:17.961473 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-04-09 03:41:17.961482 | orchestrator | Thursday 09 April 2026 03:41:16 +0000 (0:00:03.041) 0:01:09.812 ******** 2026-04-09 03:41:17.961496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 03:41:17.961504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 
03:41:17.961508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 03:41:17.961513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:41:17.961522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 03:41:19.020239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 03:41:19.020441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 03:41:19.020461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:41:19.020473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 03:41:19.020485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 03:41:19.020506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 03:41:19.020575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:41:19.020587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 03:41:19.020598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 03:41:19.020608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 03:41:19.020618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:19.020630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:19.020640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:19.020658 | orchestrator | 2026-04-09 03:41:19.020671 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 03:41:19.020690 | orchestrator | Thursday 09 April 2026 03:41:19 +0000 (0:00:02.860) 0:01:12.673 ******** 2026-04-09 03:41:20.089222 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:41:20.089374 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:41:20.089386 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:41:20.089395 | orchestrator | 2026-04-09 03:41:20.089404 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-09 03:41:20.089412 | orchestrator | Thursday 09 April 2026 03:41:19 +0000 (0:00:00.383) 0:01:13.056 ******** 2026-04-09 03:41:20.089438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 03:41:20.089450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 03:41:20.089459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 03:41:20.089469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 03:41:20.089495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 03:41:20.089519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:41:20.089527 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:41:20.089539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 03:41:20.089547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 03:41:20.089555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 03:41:20.089563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 03:41:20.089575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 03:41:20.089588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:41:23.547522 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:41:23.547638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 03:41:23.547655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 03:41:23.547667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 03:41:23.547678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 03:41:23.547718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 03:41:23.547728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:41:23.547737 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:41:23.547747 | orchestrator | 2026-04-09 03:41:23.547770 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-09 03:41:23.547781 | orchestrator | Thursday 09 April 2026 03:41:20 +0000 (0:00:00.824) 0:01:13.881 ******** 2026-04-09 03:41:23.547795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 03:41:23.547806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 03:41:23.547815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 03:41:23.547832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:41:23.547847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:41:25.332425 | orchestrator | 2026-04-09 03:41:25.332432 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 03:41:25.332439 | orchestrator | Thursday 09 April 2026 03:41:25 +0000 (0:00:04.786) 0:01:18.667 ******** 2026-04-09 03:41:25.332445 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:41:25.332455 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:42:46.918784 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:42:46.918869 | orchestrator | 2026-04-09 03:42:46.918878 | orchestrator | TASK [designate : Creating Designate databases] 
********************************
2026-04-09 03:42:46.918897 | orchestrator | Thursday 09 April 2026 03:41:25 +0000 (0:00:00.321) 0:01:18.989 ********
2026-04-09 03:42:46.918903 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-04-09 03:42:46.918908 | orchestrator |
2026-04-09 03:42:46.918913 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-04-09 03:42:46.918917 | orchestrator | Thursday 09 April 2026 03:41:27 +0000 (0:00:02.177) 0:01:21.166 ********
2026-04-09 03:42:46.918922 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-09 03:42:46.918927 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-04-09 03:42:46.918932 | orchestrator |
2026-04-09 03:42:46.918937 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-09 03:42:46.918941 | orchestrator | Thursday 09 April 2026 03:41:29 +0000 (0:00:02.247) 0:01:23.413 ********
2026-04-09 03:42:46.918946 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:42:46.918950 | orchestrator |
2026-04-09 03:42:46.918955 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-09 03:42:46.918960 | orchestrator | Thursday 09 April 2026 03:41:45 +0000 (0:00:15.994) 0:01:39.407 ********
2026-04-09 03:42:46.918964 | orchestrator |
2026-04-09 03:42:46.918969 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-09 03:42:46.918973 | orchestrator | Thursday 09 April 2026 03:41:45 +0000 (0:00:00.079) 0:01:39.487 ********
2026-04-09 03:42:46.918997 | orchestrator |
2026-04-09 03:42:46.919002 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-09 03:42:46.919006 | orchestrator | Thursday 09 April 2026 03:41:45 +0000 (0:00:00.071) 0:01:39.558 ********
2026-04-09 03:42:46.919011 | orchestrator |
2026-04-09 03:42:46.919015 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-09 03:42:46.919020 | orchestrator | Thursday 09 April 2026 03:41:45 +0000 (0:00:00.073) 0:01:39.632 ********
2026-04-09 03:42:46.919025 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:42:46.919030 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:42:46.919035 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:42:46.919039 | orchestrator |
2026-04-09 03:42:46.919044 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-09 03:42:46.919048 | orchestrator | Thursday 09 April 2026 03:41:54 +0000 (0:00:08.486) 0:01:48.118 ********
2026-04-09 03:42:46.919053 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:42:46.919057 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:42:46.919062 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:42:46.919066 | orchestrator |
2026-04-09 03:42:46.919071 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-09 03:42:46.919076 | orchestrator | Thursday 09 April 2026 03:42:04 +0000 (0:00:10.542) 0:01:58.661 ********
2026-04-09 03:42:46.919080 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:42:46.919085 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:42:46.919089 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:42:46.919094 | orchestrator |
2026-04-09 03:42:46.919098 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-09 03:42:46.919103 | orchestrator | Thursday 09 April 2026 03:42:15 +0000 (0:00:10.806) 0:02:09.467 ********
2026-04-09 03:42:46.919107 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:42:46.919112 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:42:46.919116 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:42:46.919121 | orchestrator |
2026-04-09 03:42:46.919125 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-04-09 03:42:46.919130 | orchestrator | Thursday 09 April 2026 03:42:21 +0000 (0:00:05.987) 0:02:15.454 ********
2026-04-09 03:42:46.919134 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:42:46.919139 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:42:46.919144 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:42:46.919148 | orchestrator |
2026-04-09 03:42:46.919152 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-04-09 03:42:46.919157 | orchestrator | Thursday 09 April 2026 03:42:28 +0000 (0:00:06.223) 0:02:21.678 ********
2026-04-09 03:42:46.919162 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:42:46.919166 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:42:46.919171 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:42:46.919175 | orchestrator |
2026-04-09 03:42:46.919180 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-04-09 03:42:46.919184 | orchestrator | Thursday 09 April 2026 03:42:39 +0000 (0:00:11.230) 0:02:32.909 ********
2026-04-09 03:42:46.919189 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:42:46.919193 | orchestrator |
2026-04-09 03:42:46.919198 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 03:42:46.919203 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 03:42:46.919208 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 03:42:46.919213 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 03:42:46.919217 | orchestrator |
2026-04-09 03:42:46.919222 | orchestrator |
2026-04-09 03:42:46.919227 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 03:42:46.919240 | orchestrator | Thursday 09 April 2026 03:42:46 +0000 (0:00:07.212) 0:02:40.122 ********
2026-04-09 03:42:46.919247 | orchestrator | ===============================================================================
2026-04-09 03:42:46.919256 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.99s
2026-04-09 03:42:46.919268 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.78s
2026-04-09 03:42:46.919290 | orchestrator | designate : Restart designate-worker container ------------------------- 11.23s
2026-04-09 03:42:46.919298 | orchestrator | designate : Restart designate-central container ------------------------ 10.81s
2026-04-09 03:42:46.919310 | orchestrator | designate : Restart designate-api container ---------------------------- 10.54s
2026-04-09 03:42:46.919317 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.49s
2026-04-09 03:42:46.919324 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.21s
2026-04-09 03:42:46.919332 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.55s
2026-04-09 03:42:46.919339 | orchestrator | designate : Copying over config.json files for services ----------------- 6.31s
2026-04-09 03:42:46.919346 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.22s
2026-04-09 03:42:46.919354 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.05s
2026-04-09 03:42:46.919361 | orchestrator | designate : Restart designate-producer container ------------------------ 5.99s
2026-04-09 03:42:46.919369 | orchestrator | designate : Check designate containers ---------------------------------- 4.79s
2026-04-09 03:42:46.919376 | orchestrator | designate : Copying over
pools.yaml ------------------------------------- 4.07s 2026-04-09 03:42:46.919384 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.00s 2026-04-09 03:42:46.919392 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.99s 2026-04-09 03:42:46.919399 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.79s 2026-04-09 03:42:46.919406 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.31s 2026-04-09 03:42:46.919414 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.13s 2026-04-09 03:42:46.919422 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.04s 2026-04-09 03:42:49.423363 | orchestrator | 2026-04-09 03:42:49 | INFO  | Task ff705470-12b7-4574-a214-c1dfd05563eb (octavia) was prepared for execution. 2026-04-09 03:42:49.423524 | orchestrator | 2026-04-09 03:42:49 | INFO  | It takes a moment until task ff705470-12b7-4574-a214-c1dfd05563eb (octavia) has been started and output is visible here. 
2026-04-09 03:44:57.405772 | orchestrator |
2026-04-09 03:44:57.405887 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 03:44:57.405905 | orchestrator |
2026-04-09 03:44:57.405917 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 03:44:57.405929 | orchestrator | Thursday 09 April 2026 03:42:54 +0000 (0:00:00.284) 0:00:00.284 ********
2026-04-09 03:44:57.405940 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:44:57.405952 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:44:57.405963 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:44:57.405974 | orchestrator |
2026-04-09 03:44:57.405986 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 03:44:57.405997 | orchestrator | Thursday 09 April 2026 03:42:54 +0000 (0:00:00.335) 0:00:00.619 ********
2026-04-09 03:44:57.406008 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-09 03:44:57.406083 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-09 03:44:57.406111 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-09 03:44:57.406144 | orchestrator |
2026-04-09 03:44:57.406174 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-09 03:44:57.406195 | orchestrator |
2026-04-09 03:44:57.406213 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 03:44:57.406265 | orchestrator | Thursday 09 April 2026 03:42:54 +0000 (0:00:00.464) 0:00:01.084 ********
2026-04-09 03:44:57.406287 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:44:57.406305 | orchestrator |
2026-04-09 03:44:57.406319 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-04-09 03:44:57.406332 | orchestrator | Thursday 09 April 2026 03:42:55 +0000 (0:00:00.600) 0:00:01.685 ********
2026-04-09 03:44:57.406345 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-04-09 03:44:57.406358 | orchestrator |
2026-04-09 03:44:57.406372 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-04-09 03:44:57.406391 | orchestrator | Thursday 09 April 2026 03:42:58 +0000 (0:00:03.402) 0:00:05.087 ********
2026-04-09 03:44:57.406420 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-04-09 03:44:57.406441 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-04-09 03:44:57.406459 | orchestrator |
2026-04-09 03:44:57.406480 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-04-09 03:44:57.406498 | orchestrator | Thursday 09 April 2026 03:43:05 +0000 (0:00:06.333) 0:00:11.420 ********
2026-04-09 03:44:57.406517 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 03:44:57.406528 | orchestrator |
2026-04-09 03:44:57.406539 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-04-09 03:44:57.406550 | orchestrator | Thursday 09 April 2026 03:43:08 +0000 (0:00:03.317) 0:00:14.738 ********
2026-04-09 03:44:57.406561 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 03:44:57.406572 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-09 03:44:57.406583 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-09 03:44:57.406594 | orchestrator |
2026-04-09 03:44:57.406604 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-04-09 03:44:57.406615 | orchestrator | Thursday 09 April 2026 03:43:16 +0000 (0:00:08.347) 0:00:23.085 ********
2026-04-09 03:44:57.406626 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 03:44:57.406679 | orchestrator |
2026-04-09 03:44:57.406693 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-04-09 03:44:57.406720 | orchestrator | Thursday 09 April 2026 03:43:20 +0000 (0:00:03.264) 0:00:26.349 ********
2026-04-09 03:44:57.406731 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-09 03:44:57.406742 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-09 03:44:57.406752 | orchestrator |
2026-04-09 03:44:57.406763 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-04-09 03:44:57.406775 | orchestrator | Thursday 09 April 2026 03:43:27 +0000 (0:00:07.355) 0:00:33.704 ********
2026-04-09 03:44:57.406785 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-04-09 03:44:57.406796 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-04-09 03:44:57.406807 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-04-09 03:44:57.406818 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-04-09 03:44:57.406829 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-04-09 03:44:57.406839 | orchestrator |
2026-04-09 03:44:57.406850 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 03:44:57.406861 | orchestrator | Thursday 09 April 2026 03:43:42 +0000 (0:00:15.464) 0:00:49.168 ********
2026-04-09 03:44:57.406872 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:44:57.406883 | orchestrator |
2026-04-09 03:44:57.406894 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-04-09 03:44:57.406918 | orchestrator | Thursday 09 April 2026 03:43:43 +0000 (0:00:00.857) 0:00:50.025 ********
2026-04-09 03:44:57.406929 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.406940 | orchestrator |
2026-04-09 03:44:57.406951 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-04-09 03:44:57.406961 | orchestrator | Thursday 09 April 2026 03:43:48 +0000 (0:00:04.716) 0:00:54.742 ********
2026-04-09 03:44:57.406973 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.406983 | orchestrator |
2026-04-09 03:44:57.406994 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-09 03:44:57.407025 | orchestrator | Thursday 09 April 2026 03:43:53 +0000 (0:00:04.681) 0:00:59.424 ********
2026-04-09 03:44:57.407036 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:44:57.407047 | orchestrator |
2026-04-09 03:44:57.407058 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-04-09 03:44:57.407068 | orchestrator | Thursday 09 April 2026 03:43:56 +0000 (0:00:03.317) 0:01:02.741 ********
2026-04-09 03:44:57.407079 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-09 03:44:57.407090 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-09 03:44:57.407101 | orchestrator |
2026-04-09 03:44:57.407111 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-04-09 03:44:57.407122 | orchestrator | Thursday 09 April 2026 03:44:06 +0000 (0:00:09.999) 0:01:12.740 ********
2026-04-09 03:44:57.407133 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-04-09 03:44:57.407144 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-04-09 03:44:57.407156 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-04-09 03:44:57.407168 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-04-09 03:44:57.407179 | orchestrator |
2026-04-09 03:44:57.407190 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-04-09 03:44:57.407201 | orchestrator | Thursday 09 April 2026 03:44:22 +0000 (0:00:16.337) 0:01:29.077 ********
2026-04-09 03:44:57.407216 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.407227 | orchestrator |
2026-04-09 03:44:57.407238 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-04-09 03:44:57.407248 | orchestrator | Thursday 09 April 2026 03:44:27 +0000 (0:00:04.631) 0:01:33.709 ********
2026-04-09 03:44:57.407259 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.407270 | orchestrator |
2026-04-09 03:44:57.407281 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-04-09 03:44:57.407291 | orchestrator | Thursday 09 April 2026 03:44:33 +0000 (0:00:05.625) 0:01:39.335 ********
2026-04-09 03:44:57.407302 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:44:57.407313 | orchestrator |
2026-04-09 03:44:57.407324 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-04-09 03:44:57.407335 | orchestrator | Thursday 09 April 2026 03:44:33 +0000 (0:00:00.224) 0:01:39.559 ********
2026-04-09 03:44:57.407346 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:44:57.407356 | orchestrator |
2026-04-09 03:44:57.407367 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 03:44:57.407387 | orchestrator | Thursday 09 April 2026 03:44:37 +0000 (0:00:04.556) 0:01:44.115 ********
2026-04-09 03:44:57.407407 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:44:57.407427 | orchestrator |
2026-04-09 03:44:57.407448 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-04-09 03:44:57.407470 | orchestrator | Thursday 09 April 2026 03:44:39 +0000 (0:00:01.174) 0:01:45.290 ********
2026-04-09 03:44:57.407502 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:44:57.407515 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.407526 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:44:57.407537 | orchestrator |
2026-04-09 03:44:57.407548 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-04-09 03:44:57.407565 | orchestrator | Thursday 09 April 2026 03:44:44 +0000 (0:00:05.842) 0:01:51.133 ********
2026-04-09 03:44:57.407576 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.407587 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:44:57.407598 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:44:57.407608 | orchestrator |
2026-04-09 03:44:57.407619 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-04-09 03:44:57.407629 | orchestrator | Thursday 09 April 2026 03:44:49 +0000 (0:00:04.725) 0:01:55.858 ********
2026-04-09 03:44:57.407673 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.407685 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:44:57.407695 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:44:57.407706 | orchestrator |
2026-04-09 03:44:57.407717 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-04-09 03:44:57.407727 | orchestrator | Thursday 09 April 2026 03:44:50 +0000 (0:00:01.037) 0:01:56.895 ********
2026-04-09 03:44:57.407738 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:44:57.407749 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:44:57.407759 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:44:57.407770 | orchestrator |
2026-04-09 03:44:57.407781 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-04-09 03:44:57.407791 | orchestrator | Thursday 09 April 2026 03:44:52 +0000 (0:00:01.814) 0:01:58.710 ********
2026-04-09 03:44:57.407802 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:44:57.407813 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.407823 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:44:57.407834 | orchestrator |
2026-04-09 03:44:57.407844 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-04-09 03:44:57.407855 | orchestrator | Thursday 09 April 2026 03:44:53 +0000 (0:00:01.269) 0:01:59.979 ********
2026-04-09 03:44:57.407866 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.407876 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:44:57.407887 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:44:57.407898 | orchestrator |
2026-04-09 03:44:57.407908 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-04-09 03:44:57.407919 | orchestrator | Thursday 09 April 2026 03:44:54 +0000 (0:00:01.258) 0:02:01.238 ********
2026-04-09 03:44:57.407930 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:44:57.407940 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:44:57.407974 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:44:57.407985 | orchestrator |
2026-04-09 03:44:57.408006 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-04-09 03:45:24.062429 | orchestrator | Thursday 09 April 2026 03:44:57 +0000 (0:00:02.427) 0:02:03.665 ********
2026-04-09 03:45:24.062522 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:45:24.062536 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:45:24.062546 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:45:24.062555 | orchestrator |
2026-04-09 03:45:24.062561 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-04-09 03:45:24.062567 | orchestrator | Thursday 09 April 2026 03:44:58 +0000 (0:00:01.545) 0:02:05.210 ********
2026-04-09 03:45:24.062572 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:45:24.062578 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:45:24.062583 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:45:24.062588 | orchestrator |
2026-04-09 03:45:24.062593 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-04-09 03:45:24.062598 | orchestrator | Thursday 09 April 2026 03:44:59 +0000 (0:00:00.660) 0:02:05.871 ********
2026-04-09 03:45:24.062603 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:45:24.062634 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:45:24.062640 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:45:24.062645 | orchestrator |
2026-04-09 03:45:24.062650 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 03:45:24.062656 | orchestrator | Thursday 09 April 2026 03:45:02 +0000 (0:00:03.189) 0:02:09.060 ********
2026-04-09 03:45:24.062661 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 03:45:24.062667 | orchestrator |
2026-04-09 03:45:24.062720 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-09 03:45:24.062725 | orchestrator | Thursday 09 April 2026 03:45:03 +0000 (0:00:00.583) 0:02:09.643 ********
2026-04-09 03:45:24.062730 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:45:24.062735 | orchestrator |
2026-04-09 03:45:24.062740 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-09 03:45:24.062745 | orchestrator | Thursday 09 April 2026 03:45:07 +0000 (0:00:04.096) 0:02:13.740 ********
2026-04-09 03:45:24.062750 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:45:24.062755 | orchestrator |
2026-04-09 03:45:24.062760 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-09 03:45:24.062764 | orchestrator | Thursday 09 April 2026 03:45:10 +0000 (0:00:03.176) 0:02:16.916 ********
2026-04-09 03:45:24.062770 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-09 03:45:24.062779 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-09 03:45:24.062787 | orchestrator |
2026-04-09 03:45:24.062795 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-09 03:45:24.062802 | orchestrator | Thursday 09 April 2026 03:45:17 +0000 (0:00:06.824) 0:02:23.741 ********
2026-04-09 03:45:24.062807 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:45:24.062812 | orchestrator |
2026-04-09 03:45:24.062817 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-04-09 03:45:24.062822 | orchestrator | Thursday 09 April 2026 03:45:21 +0000 (0:00:04.047) 0:02:27.789 ********
2026-04-09 03:45:24.062826 | orchestrator | ok: [testbed-node-0]
2026-04-09 03:45:24.062831 | orchestrator | ok: [testbed-node-1]
2026-04-09 03:45:24.062836 | orchestrator | ok: [testbed-node-2]
2026-04-09 03:45:24.062841 | orchestrator |
2026-04-09 03:45:24.062845 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-04-09 03:45:24.062852 | orchestrator | Thursday 09 April 2026 03:45:22 +0000 (0:00:00.544) 0:02:28.333 ********
2026-04-09 03:45:24.062878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:24.062901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:24.062914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:24.062920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:24.062927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:24.062935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:24.062942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:24.062948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:24.062962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:25.554357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:25.554447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:25.554460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:25.554486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:25.554496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:25.554527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:25.554537 | orchestrator |
2026-04-09 03:45:25.554548 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-04-09 03:45:25.554559 | orchestrator | Thursday 09 April 2026 03:45:24 +0000 (0:00:02.405) 0:02:30.738 ********
2026-04-09 03:45:25.554568 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:45:25.554577 | orchestrator |
2026-04-09 03:45:25.554586 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-04-09 03:45:25.554595 | orchestrator | Thursday 09 April 2026 03:45:24 +0000 (0:00:00.145) 0:02:30.884 ********
2026-04-09 03:45:25.554604 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:45:25.554627 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:45:25.554637 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:45:25.554645 | orchestrator |
2026-04-09 03:45:25.554654 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-04-09 03:45:25.554663 | orchestrator | Thursday 09 April 2026 03:45:24 +0000 (0:00:00.337) 0:02:31.222 ********
2026-04-09 03:45:25.554720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:25.554733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:25.554748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:25.554759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value':
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 03:45:25.554775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:45:25.554784 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:45:25.554802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 03:45:30.575660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 03:45:30.575812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 03:45:30.575839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 03:45:30.575869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:45:30.575906 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:45:30.575916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 03:45:30.575925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 03:45:30.575947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 03:45:30.575954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 03:45:30.575965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:45:30.575977 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:45:30.575984 | orchestrator | 2026-04-09 03:45:30.575991 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-09 03:45:30.575999 | orchestrator | Thursday 09 April 2026 03:45:25 +0000 (0:00:00.708) 0:02:31.930 ******** 2026-04-09 03:45:30.576006 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:45:30.576013 | orchestrator | 2026-04-09 03:45:30.576019 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-09 03:45:30.576025 | orchestrator | Thursday 09 April 2026 03:45:26 +0000 (0:00:00.832) 0:02:32.763 ******** 2026-04-09 03:45:30.576032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 03:45:30.576040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 03:45:30.576052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 03:45:32.106004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 03:45:32.106147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 03:45:32.106156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 03:45:32.106161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 03:45:32.106167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 03:45:32.106172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 03:45:32.106188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 03:45:32.106193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 03:45:32.106208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 03:45:32.106213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:45:32.106219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:45:32.106223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:45:32.106238 | orchestrator | 2026-04-09 03:45:32.106245 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-09 03:45:32.106250 | orchestrator | Thursday 09 April 2026 03:45:31 +0000 (0:00:05.007) 0:02:37.771 ******** 2026-04-09 03:45:32.106261 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 03:45:32.218792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 03:45:32.218888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 03:45:32.218899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 03:45:32.218908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 03:45:32.218914 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:45:32.218922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 03:45:32.218929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 03:45:32.218970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 03:45:32.218987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:32.218997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:32.219007 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:45:32.219016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:32.219025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:32.219034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:32.219059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:33.181592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:33.181765 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:45:33.181787 | orchestrator |
2026-04-09 03:45:33.181800 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-04-09 03:45:33.181816 | orchestrator | Thursday 09 April 2026 03:45:32 +0000 (0:00:00.719) 0:02:38.490 ********
2026-04-09 03:45:33.181830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:33.181844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:33.181856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:33.181869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:33.181924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:33.181942 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:45:33.181971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:33.181997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:33.182087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:33.182112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:33.182148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:33.182169 | orchestrator | skipping: [testbed-node-1]
2026-04-09 03:45:33.182226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:37.955876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:37.955989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:37.956006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:37.956019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:37.956058 | orchestrator | skipping: [testbed-node-2]
2026-04-09 03:45:37.956072 | orchestrator |
2026-04-09 03:45:37.956084 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-04-09 03:45:37.956096 | orchestrator | Thursday 09 April 2026 03:45:33 +0000 (0:00:01.508) 0:02:39.998 ********
2026-04-09 03:45:37.956108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:37.956155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:37.956169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:37.956181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:37.956193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:37.956215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:45:37.956228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:37.956267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:54.929181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:45:54.929275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:54.929287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:54.929316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:45:54.929325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:54.929333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:54.929367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:45:54.929375 | orchestrator |
2026-04-09 03:45:54.929384 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-04-09 03:45:54.929392 | orchestrator | Thursday 09 April 2026 03:45:38 +0000 (0:00:05.212) 0:02:45.211 ********
2026-04-09 03:45:54.929399 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-09 03:45:54.929407 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-09 03:45:54.929414 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-09 03:45:54.929420 | orchestrator |
2026-04-09 03:45:54.929427 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-04-09 03:45:54.929434 | orchestrator | Thursday 09 April 2026 03:45:40 +0000 (0:00:01.741) 0:02:46.952 ********
2026-04-09 03:45:54.929442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:54.929457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:54.929464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 03:45:54.929481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:46:10.818890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:46:10.819041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 03:46:10.819072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:46:10.819127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:46:10.819150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-09 03:46:10.819170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:46:10.819227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:46:10.819241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-09 03:46:10.819253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:46:10.819276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:46:10.819287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-09 03:46:10.819299 | orchestrator |
2026-04-09 03:46:10.819314 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-04-09 03:46:10.819330 | orchestrator | Thursday 09 April 2026 03:45:58 +0000 (0:00:17.706) 0:03:04.659 ********
2026-04-09 03:46:10.819348 | orchestrator | changed: [testbed-node-0]
2026-04-09 03:46:10.819369 | orchestrator | changed: [testbed-node-1]
2026-04-09 03:46:10.819389 | orchestrator | changed: [testbed-node-2]
2026-04-09 03:46:10.819407 | orchestrator |
2026-04-09 03:46:10.819425 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-04-09 03:46:10.819444 | orchestrator | Thursday 09 April 2026 03:46:00 +0000 (0:00:01.855) 0:03:06.514 ********
2026-04-09 03:46:10.819463 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-09 03:46:10.819481 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-09 03:46:10.819501 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-09 03:46:10.819517 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-09 03:46:10.819534 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-09 03:46:10.819552 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-09 03:46:10.819570 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-09 03:46:10.819588 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-09 03:46:10.819606 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-09 03:46:10.819624 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-09 03:46:10.819643 | orchestrator
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 03:46:10.819662 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 03:46:10.819676 | orchestrator | 2026-04-09 03:46:10.819687 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-09 03:46:10.819707 | orchestrator | Thursday 09 April 2026 03:46:05 +0000 (0:00:05.296) 0:03:11.810 ******** 2026-04-09 03:46:10.819718 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 03:46:10.819729 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 03:46:10.819783 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 03:46:19.654403 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 03:46:19.654499 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 03:46:19.654508 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 03:46:19.654515 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 03:46:19.654521 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 03:46:19.654527 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 03:46:19.654533 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 03:46:19.654539 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 03:46:19.654545 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 03:46:19.654551 | orchestrator | 2026-04-09 03:46:19.654558 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-09 03:46:19.654565 | orchestrator | Thursday 09 April 2026 03:46:10 +0000 (0:00:05.268) 0:03:17.079 ******** 2026-04-09 03:46:19.654571 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-04-09 03:46:19.654577 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 03:46:19.654583 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 03:46:19.654589 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 03:46:19.654595 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 03:46:19.654600 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 03:46:19.654606 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 03:46:19.654612 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 03:46:19.654618 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 03:46:19.654623 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 03:46:19.654629 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 03:46:19.654635 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 03:46:19.654641 | orchestrator | 2026-04-09 03:46:19.654647 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-09 03:46:19.654652 | orchestrator | Thursday 09 April 2026 03:46:16 +0000 (0:00:05.633) 0:03:22.713 ******** 2026-04-09 03:46:19.654661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 03:46:19.654670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 03:46:19.654727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 03:46:19.654736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 03:46:19.654812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 03:46:19.654822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-04-09 03:46:19.654828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 03:46:19.654836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 03:46:19.654854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 03:46:19.654867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 03:47:42.430416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 03:47:42.430530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 03:47:42.430545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:47:42.430557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:47:42.430593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 03:47:42.430604 | orchestrator | 2026-04-09 
03:47:42.430615 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-09 03:47:42.430626 | orchestrator | Thursday 09 April 2026 03:46:20 +0000 (0:00:04.130) 0:03:26.844 ******** 2026-04-09 03:47:42.430636 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:47:42.430648 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:47:42.430657 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:47:42.430667 | orchestrator | 2026-04-09 03:47:42.430692 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-09 03:47:42.430718 | orchestrator | Thursday 09 April 2026 03:46:21 +0000 (0:00:00.558) 0:03:27.402 ******** 2026-04-09 03:47:42.430738 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.430748 | orchestrator | 2026-04-09 03:47:42.430777 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-09 03:47:42.430787 | orchestrator | Thursday 09 April 2026 03:46:23 +0000 (0:00:02.181) 0:03:29.584 ******** 2026-04-09 03:47:42.430797 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.430806 | orchestrator | 2026-04-09 03:47:42.430816 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-09 03:47:42.430825 | orchestrator | Thursday 09 April 2026 03:46:25 +0000 (0:00:02.151) 0:03:31.735 ******** 2026-04-09 03:47:42.430835 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.430865 | orchestrator | 2026-04-09 03:47:42.430877 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-09 03:47:42.430889 | orchestrator | Thursday 09 April 2026 03:46:27 +0000 (0:00:02.263) 0:03:33.999 ******** 2026-04-09 03:47:42.430918 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.430928 | orchestrator | 2026-04-09 03:47:42.430938 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-04-09 03:47:42.430947 | orchestrator | Thursday 09 April 2026 03:46:30 +0000 (0:00:02.285) 0:03:36.284 ******** 2026-04-09 03:47:42.430957 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.430968 | orchestrator | 2026-04-09 03:47:42.430978 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 03:47:42.430987 | orchestrator | Thursday 09 April 2026 03:46:52 +0000 (0:00:22.612) 0:03:58.897 ******** 2026-04-09 03:47:42.430998 | orchestrator | 2026-04-09 03:47:42.431008 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 03:47:42.431018 | orchestrator | Thursday 09 April 2026 03:46:52 +0000 (0:00:00.071) 0:03:58.968 ******** 2026-04-09 03:47:42.431027 | orchestrator | 2026-04-09 03:47:42.431036 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 03:47:42.431048 | orchestrator | Thursday 09 April 2026 03:46:52 +0000 (0:00:00.073) 0:03:59.041 ******** 2026-04-09 03:47:42.431060 | orchestrator | 2026-04-09 03:47:42.431071 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-09 03:47:42.431080 | orchestrator | Thursday 09 April 2026 03:46:52 +0000 (0:00:00.069) 0:03:59.111 ******** 2026-04-09 03:47:42.431090 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.431100 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:47:42.431110 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:47:42.431120 | orchestrator | 2026-04-09 03:47:42.431130 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-09 03:47:42.431139 | orchestrator | Thursday 09 April 2026 03:47:05 +0000 (0:00:12.540) 0:04:11.652 ******** 2026-04-09 03:47:42.431162 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.431172 | orchestrator | changed: 
[testbed-node-1] 2026-04-09 03:47:42.431182 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:47:42.431192 | orchestrator | 2026-04-09 03:47:42.431202 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-09 03:47:42.431211 | orchestrator | Thursday 09 April 2026 03:47:17 +0000 (0:00:11.755) 0:04:23.407 ******** 2026-04-09 03:47:42.431219 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:47:42.431228 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:47:42.431237 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.431246 | orchestrator | 2026-04-09 03:47:42.431257 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-09 03:47:42.431267 | orchestrator | Thursday 09 April 2026 03:47:25 +0000 (0:00:08.391) 0:04:31.799 ******** 2026-04-09 03:47:42.431276 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.431286 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:47:42.431295 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:47:42.431306 | orchestrator | 2026-04-09 03:47:42.431316 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-09 03:47:42.431326 | orchestrator | Thursday 09 April 2026 03:47:31 +0000 (0:00:05.792) 0:04:37.592 ******** 2026-04-09 03:47:42.431336 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:47:42.431346 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:47:42.431359 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:47:42.431374 | orchestrator | 2026-04-09 03:47:42.431384 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:47:42.431395 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 03:47:42.431406 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-09 03:47:42.431416 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 03:47:42.431426 | orchestrator | 2026-04-09 03:47:42.431435 | orchestrator | 2026-04-09 03:47:42.431445 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:47:42.431454 | orchestrator | Thursday 09 April 2026 03:47:42 +0000 (0:00:11.088) 0:04:48.681 ******** 2026-04-09 03:47:42.431463 | orchestrator | =============================================================================== 2026-04-09 03:47:42.431472 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.61s 2026-04-09 03:47:42.431481 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.71s 2026-04-09 03:47:42.431491 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.34s 2026-04-09 03:47:42.431500 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.46s 2026-04-09 03:47:42.431509 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.54s 2026-04-09 03:47:42.431528 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.76s 2026-04-09 03:47:42.431538 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.09s 2026-04-09 03:47:42.431548 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.00s 2026-04-09 03:47:42.431558 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.39s 2026-04-09 03:47:42.431569 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.35s 2026-04-09 03:47:42.431580 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.36s 2026-04-09 03:47:42.431590 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 6.82s 2026-04-09 03:47:42.431600 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.33s 2026-04-09 03:47:42.431617 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.84s 2026-04-09 03:47:42.431634 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.79s 2026-04-09 03:47:42.916382 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.63s 2026-04-09 03:47:42.916476 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.63s 2026-04-09 03:47:42.916488 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.30s 2026-04-09 03:47:42.916497 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.27s 2026-04-09 03:47:42.916506 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.21s 2026-04-09 03:47:45.561318 | orchestrator | 2026-04-09 03:47:45 | INFO  | Task a9d558b0-8332-4447-af54-70eb70c885dc (ceilometer) was prepared for execution. 2026-04-09 03:47:45.561419 | orchestrator | 2026-04-09 03:47:45 | INFO  | It takes a moment until task a9d558b0-8332-4447-af54-70eb70c885dc (ceilometer) has been started and output is visible here. 
2026-04-09 03:48:09.825314 | orchestrator | 2026-04-09 03:48:09.825429 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:48:09.825446 | orchestrator | 2026-04-09 03:48:09.825460 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:48:09.825473 | orchestrator | Thursday 09 April 2026 03:47:50 +0000 (0:00:00.301) 0:00:00.301 ******** 2026-04-09 03:48:09.825486 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:48:09.825500 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:48:09.825513 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:48:09.825526 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:48:09.825539 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:48:09.825551 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:48:09.825564 | orchestrator | 2026-04-09 03:48:09.825576 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:48:09.825589 | orchestrator | Thursday 09 April 2026 03:47:50 +0000 (0:00:00.759) 0:00:01.061 ******** 2026-04-09 03:48:09.825602 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-09 03:48:09.825615 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-09 03:48:09.825627 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-09 03:48:09.825640 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-09 03:48:09.825652 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-09 03:48:09.825664 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-09 03:48:09.825677 | orchestrator | 2026-04-09 03:48:09.825689 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-09 03:48:09.825701 | orchestrator | 2026-04-09 03:48:09.825714 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-04-09 03:48:09.825725 | orchestrator | Thursday 09 April 2026 03:47:51 +0000 (0:00:00.683) 0:00:01.744 ******** 2026-04-09 03:48:09.825739 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:48:09.825753 | orchestrator | 2026-04-09 03:48:09.825766 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-04-09 03:48:09.825778 | orchestrator | Thursday 09 April 2026 03:47:52 +0000 (0:00:01.319) 0:00:03.064 ******** 2026-04-09 03:48:09.825790 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:09.825803 | orchestrator | 2026-04-09 03:48:09.825815 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-04-09 03:48:09.825827 | orchestrator | Thursday 09 April 2026 03:47:53 +0000 (0:00:00.140) 0:00:03.204 ******** 2026-04-09 03:48:09.825840 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:09.825852 | orchestrator | 2026-04-09 03:48:09.825865 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-04-09 03:48:09.825938 | orchestrator | Thursday 09 April 2026 03:47:53 +0000 (0:00:00.140) 0:00:03.345 ******** 2026-04-09 03:48:09.825954 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 03:48:09.825968 | orchestrator | 2026-04-09 03:48:09.825982 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-04-09 03:48:09.825995 | orchestrator | Thursday 09 April 2026 03:47:56 +0000 (0:00:03.803) 0:00:07.148 ******** 2026-04-09 03:48:09.826009 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 03:48:09.826071 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-04-09 03:48:09.826086 | orchestrator | 
2026-04-09 03:48:09.826100 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-04-09 03:48:09.826113 | orchestrator | Thursday 09 April 2026 03:48:00 +0000 (0:00:03.850) 0:00:10.999 ******** 2026-04-09 03:48:09.826128 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 03:48:09.826141 | orchestrator | 2026-04-09 03:48:09.826155 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-04-09 03:48:09.826184 | orchestrator | Thursday 09 April 2026 03:48:03 +0000 (0:00:03.195) 0:00:14.195 ******** 2026-04-09 03:48:09.826199 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-04-09 03:48:09.826214 | orchestrator | 2026-04-09 03:48:09.826228 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-04-09 03:48:09.826242 | orchestrator | Thursday 09 April 2026 03:48:07 +0000 (0:00:03.973) 0:00:18.169 ******** 2026-04-09 03:48:09.826258 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:09.826273 | orchestrator | 2026-04-09 03:48:09.826287 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-09 03:48:09.826299 | orchestrator | Thursday 09 April 2026 03:48:08 +0000 (0:00:00.151) 0:00:18.320 ******** 2026-04-09 03:48:09.826311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:09.826343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:09.826352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:09.826361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:09.826382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:09.826398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:09.826407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:09.826422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:15.174248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:15.174332 | orchestrator | 2026-04-09 03:48:15.174340 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-09 03:48:15.174362 | orchestrator | Thursday 09 April 2026 03:48:09 +0000 (0:00:01.685) 0:00:20.006 ******** 2026-04-09 03:48:15.174367 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2026-04-09 03:48:15.174373 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:48:15.174378 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 03:48:15.174382 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 03:48:15.174387 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 03:48:15.174391 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 03:48:15.174396 | orchestrator | 2026-04-09 03:48:15.174400 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-09 03:48:15.174406 | orchestrator | Thursday 09 April 2026 03:48:11 +0000 (0:00:01.782) 0:00:21.789 ******** 2026-04-09 03:48:15.174411 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:48:15.174416 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:48:15.174420 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:48:15.174425 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:48:15.174429 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:48:15.174434 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:48:15.174438 | orchestrator | 2026-04-09 03:48:15.174443 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-09 03:48:15.174448 | orchestrator | Thursday 09 April 2026 03:48:12 +0000 (0:00:00.695) 0:00:22.484 ******** 2026-04-09 03:48:15.174453 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:15.174457 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:15.174462 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:15.174467 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:15.174471 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:15.174476 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:15.174480 | orchestrator | 2026-04-09 03:48:15.174485 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-04-09 03:48:15.174490 | orchestrator | Thursday 09 April 2026 03:48:13 +0000 (0:00:00.877) 0:00:23.362 ******** 2026-04-09 03:48:15.174495 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:48:15.174499 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:48:15.174503 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:48:15.174508 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:48:15.174512 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:48:15.174517 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:48:15.174521 | orchestrator | 2026-04-09 03:48:15.174526 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-09 03:48:15.174539 | orchestrator | Thursday 09 April 2026 03:48:13 +0000 (0:00:00.691) 0:00:24.054 ******** 2026-04-09 03:48:15.174546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:15.174555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:15.174570 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:15.174594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:15.174603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:15.174610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:15.174618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:15.174626 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:15.174635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:15.174644 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:15.174652 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:15.174661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': 
{'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:15.174678 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:15.174709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:20.205733 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:20.205876 | orchestrator | 2026-04-09 03:48:20.206096 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-09 03:48:20.206123 | orchestrator | Thursday 09 April 2026 03:48:15 +0000 (0:00:01.305) 0:00:25.359 ******** 2026-04-09 03:48:20.206142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:20.206161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:20.206195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:20.206208 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:20.206222 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:20.206265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:20.206283 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:20.206299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 
03:48:20.206316 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:20.206355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:20.206373 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:20.206388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:20.206403 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:20.206426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:20.206441 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:20.206455 | orchestrator | 2026-04-09 03:48:20.206471 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-09 03:48:20.206501 | orchestrator | Thursday 09 April 2026 03:48:16 +0000 (0:00:00.962) 0:00:26.322 ******** 2026-04-09 03:48:20.206515 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:48:20.206529 | orchestrator | 2026-04-09 03:48:20.206542 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-09 03:48:20.206556 | orchestrator | Thursday 09 April 2026 03:48:16 +0000 (0:00:00.721) 0:00:27.043 ******** 2026-04-09 03:48:20.206570 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:48:20.206585 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:48:20.206598 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:48:20.206611 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:48:20.206624 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:48:20.206637 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:48:20.206651 | orchestrator | 2026-04-09 03:48:20.206664 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-09 03:48:20.206677 | orchestrator | Thursday 09 April 2026 03:48:17 +0000 (0:00:00.850) 
0:00:27.894 ******** 2026-04-09 03:48:20.206690 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:48:20.206703 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:48:20.206715 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:48:20.206728 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:48:20.206742 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:48:20.206755 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:48:20.206768 | orchestrator | 2026-04-09 03:48:20.206777 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-09 03:48:20.206786 | orchestrator | Thursday 09 April 2026 03:48:18 +0000 (0:00:01.027) 0:00:28.921 ******** 2026-04-09 03:48:20.206794 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:20.206802 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:20.206810 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:20.206818 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:20.206826 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:20.206833 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:20.206841 | orchestrator | 2026-04-09 03:48:20.206849 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-09 03:48:20.206857 | orchestrator | Thursday 09 April 2026 03:48:19 +0000 (0:00:00.840) 0:00:29.762 ******** 2026-04-09 03:48:20.206865 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:20.206873 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:20.206881 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:20.206920 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:20.206934 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:20.206943 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:20.206951 | orchestrator | 2026-04-09 03:48:25.581369 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-04-09 03:48:25.581464 | orchestrator | Thursday 09 April 2026 03:48:20 +0000 (0:00:00.632) 0:00:30.395 ******** 2026-04-09 03:48:25.581476 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:48:25.581487 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 03:48:25.581496 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 03:48:25.581505 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 03:48:25.581514 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 03:48:25.581523 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 03:48:25.581532 | orchestrator | 2026-04-09 03:48:25.581542 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-09 03:48:25.581551 | orchestrator | Thursday 09 April 2026 03:48:21 +0000 (0:00:01.615) 0:00:32.010 ******** 2026-04-09 03:48:25.581563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:25.581601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:25.581612 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:25.581634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:25.581644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:25.581653 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:25.581662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:25.581689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:25.581699 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:25.581709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:25.581725 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 03:48:25.581734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:25.581743 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:25.581756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:25.581765 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:25.581774 | orchestrator | 2026-04-09 03:48:25.581783 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-04-09 03:48:25.581792 | orchestrator | Thursday 09 April 2026 03:48:22 +0000 (0:00:00.959) 0:00:32.969 ******** 2026-04-09 03:48:25.581801 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 03:48:25.581810 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:25.581818 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:25.581827 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:25.581836 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:25.581844 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:25.581853 | orchestrator | 2026-04-09 03:48:25.581862 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-04-09 03:48:25.581871 | orchestrator | Thursday 09 April 2026 03:48:23 +0000 (0:00:00.856) 0:00:33.825 ******** 2026-04-09 03:48:25.581880 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:48:25.581890 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 03:48:25.581929 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 03:48:25.581939 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 03:48:25.581948 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 03:48:25.581958 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 03:48:25.581968 | orchestrator | 2026-04-09 03:48:25.581978 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-04-09 03:48:25.581989 | orchestrator | Thursday 09 April 2026 03:48:25 +0000 (0:00:01.494) 0:00:35.320 ******** 2026-04-09 03:48:25.582007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:31.721434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:31.721535 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:31.721552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:31.721584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:31.721593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:31.721600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:31.721607 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:31.721613 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:31.721620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:31.721652 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:31.721684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:31.721698 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:31.721709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:31.721719 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:31.721729 | orchestrator | 2026-04-09 03:48:31.721739 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-04-09 03:48:31.721750 | orchestrator | Thursday 09 April 2026 03:48:26 +0000 (0:00:01.217) 0:00:36.538 ******** 2026-04-09 03:48:31.721760 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:31.721770 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:31.721780 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:31.721791 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:31.721801 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:31.721818 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:31.721829 | orchestrator | 2026-04-09 03:48:31.721845 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-04-09 03:48:31.721856 | orchestrator | Thursday 09 April 2026 03:48:27 +0000 (0:00:00.814) 0:00:37.352 ******** 2026-04-09 03:48:31.721866 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:31.721877 | orchestrator | 2026-04-09 03:48:31.721887 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-04-09 03:48:31.721897 | orchestrator | Thursday 09 April 2026 03:48:27 +0000 (0:00:00.165) 0:00:37.518 ******** 2026-04-09 03:48:31.721957 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:31.721966 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:31.721973 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:31.721982 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:31.721993 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:31.722011 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:31.722092 | 
orchestrator | 2026-04-09 03:48:31.722104 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-09 03:48:31.722117 | orchestrator | Thursday 09 April 2026 03:48:27 +0000 (0:00:00.639) 0:00:38.158 ******** 2026-04-09 03:48:31.722183 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:48:31.722193 | orchestrator | 2026-04-09 03:48:31.722201 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-04-09 03:48:31.722208 | orchestrator | Thursday 09 April 2026 03:48:29 +0000 (0:00:01.435) 0:00:39.593 ******** 2026-04-09 03:48:31.722215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:31.722235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:32.247398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:32.247492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:32.247520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:32.247529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:32.247558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:32.247565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:32.247583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:32.247588 | orchestrator | 2026-04-09 03:48:32.247593 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-09 03:48:32.247600 | orchestrator | Thursday 09 April 2026 03:48:31 +0000 (0:00:02.312) 0:00:41.905 ******** 2026-04-09 03:48:32.247605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:32.247613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:32.247623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:32.247627 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:32.247633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:32.247637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:32.247646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:34.363598 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:34.363680 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:34.363690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:34.363700 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:34.363730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:34.363756 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:34.363763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-04-09 03:48:34.363769 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:34.363775 | orchestrator | 2026-04-09 03:48:34.363782 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-09 03:48:34.363790 | orchestrator | Thursday 09 April 2026 03:48:32 +0000 (0:00:00.986) 0:00:42.891 ******** 2026-04-09 03:48:34.363797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:34.363804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:34.363826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:34.363833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:34.363848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:34.363855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:34.363861 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:34.363867 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:34.363873 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:34.363879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:34.363885 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:34.363891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:34.363897 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:34.363961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:42.609415 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:42.609711 | orchestrator | 2026-04-09 03:48:42.609735 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-04-09 03:48:42.609748 | orchestrator | Thursday 09 April 2026 03:48:34 +0000 (0:00:01.656) 0:00:44.548 ******** 2026-04-09 03:48:42.609781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:42.609807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:42.609826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:42.609847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:42.609866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:42.609941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:42.609978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:42.609992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:42.610006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:42.610080 | orchestrator | 2026-04-09 03:48:42.610097 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-04-09 03:48:42.610110 | orchestrator | Thursday 09 April 2026 03:48:37 +0000 (0:00:02.656) 0:00:47.205 
******** 2026-04-09 03:48:42.610123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:42.610137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:42.610160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:53.188429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:53.188510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:53.188517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:53.188523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:53.188529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:53.188534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:53.188553 | orchestrator | 2026-04-09 03:48:53.188559 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-09 03:48:53.188565 | orchestrator | Thursday 09 April 2026 03:48:42 +0000 (0:00:05.588) 0:00:52.794 ******** 2026-04-09 03:48:53.188580 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:48:53.188586 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 03:48:53.188590 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 03:48:53.188594 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 03:48:53.188598 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 03:48:53.188602 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 03:48:53.188606 | orchestrator | 2026-04-09 03:48:53.188611 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-09 03:48:53.188615 | orchestrator | Thursday 09 April 2026 03:48:44 +0000 (0:00:01.823) 0:00:54.617 ******** 2026-04-09 03:48:53.188619 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:53.188623 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:53.188627 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:53.188631 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:53.188639 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:53.188643 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:53.188647 | orchestrator | 2026-04-09 03:48:53.188651 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-09 
03:48:53.188656 | orchestrator | Thursday 09 April 2026 03:48:45 +0000 (0:00:00.721) 0:00:55.338 ******** 2026-04-09 03:48:53.188660 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:53.188664 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:53.188668 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:53.188673 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:48:53.188677 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:48:53.188681 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:48:53.188685 | orchestrator | 2026-04-09 03:48:53.188689 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-09 03:48:53.188693 | orchestrator | Thursday 09 April 2026 03:48:47 +0000 (0:00:01.926) 0:00:57.265 ******** 2026-04-09 03:48:53.188698 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:53.188706 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:53.188713 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:53.188724 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:48:53.188732 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:48:53.188739 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:48:53.188747 | orchestrator | 2026-04-09 03:48:53.188754 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-09 03:48:53.188761 | orchestrator | Thursday 09 April 2026 03:48:48 +0000 (0:00:01.590) 0:00:58.855 ******** 2026-04-09 03:48:53.188768 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:48:53.188775 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 03:48:53.188782 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 03:48:53.188788 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 03:48:53.188795 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 03:48:53.188802 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-04-09 03:48:53.188809 | orchestrator | 2026-04-09 03:48:53.188816 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-09 03:48:53.188830 | orchestrator | Thursday 09 April 2026 03:48:50 +0000 (0:00:01.795) 0:01:00.651 ******** 2026-04-09 03:48:53.188838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:53.188845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:53.188852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:53.188870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:54.182372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:54.182473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:48:54.182519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:54.182531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:54.182542 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:48:54.182552 | orchestrator | 2026-04-09 03:48:54.182561 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-09 03:48:54.182571 | orchestrator | Thursday 09 April 2026 03:48:53 +0000 (0:00:02.713) 0:01:03.364 ******** 2026-04-09 03:48:54.182590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:54.182610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:54.182621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:54.182637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:54.182646 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:54.182655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:54.182661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:54.182666 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:54.182672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:54.182677 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:54.182682 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:54.182697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:57.816635 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:57.816733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:57.816751 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:57.816761 | orchestrator | 2026-04-09 03:48:57.816772 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-09 03:48:57.816783 | orchestrator | Thursday 09 April 2026 03:48:54 +0000 (0:00:01.004) 0:01:04.368 ******** 2026-04-09 03:48:57.816793 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:48:57.816802 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 03:48:57.816812 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:57.816822 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:57.816831 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:57.816841 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:57.816850 | orchestrator | 2026-04-09 03:48:57.816860 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-04-09 03:48:57.816869 | orchestrator | Thursday 09 April 2026 03:48:55 +0000 (0:00:00.901) 0:01:05.270 ******** 2026-04-09 03:48:57.816881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:57.816892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:57.816904 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
03:48:57.816914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:57.817011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:57.817061 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:48:57.817103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 03:48:57.817123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 03:48:57.817137 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:48:57.817147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:57.817158 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:48:57.817170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:57.817181 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:48:57.817192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-09 03:48:57.817213 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:48:57.817230 | orchestrator | 2026-04-09 03:48:57.817242 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-04-09 03:48:57.817253 | orchestrator | Thursday 09 April 2026 03:48:55 +0000 (0:00:00.912) 0:01:06.183 ******** 2026-04-09 03:48:57.817272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:49:34.151795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:49:34.151896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 03:49:34.151906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:49:34.151915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:49:34.151922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:49:34.151950 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:49:34.152022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 03:49:34.152030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-09 03:49:34.152036 | orchestrator | 
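The per-item dicts in the task above carry Kolla-style `healthcheck` settings (`interval`, `retries`, `start_period`, `timeout` in seconds, plus a `test` that is either `'NONE'` or a `['CMD-SHELL', …]` pair such as `healthcheck_port ceilometer-polling 5672`). As a rough sketch of how such a dict maps onto container-engine health options — this is illustrative only, not Kolla's actual implementation:

```python
def healthcheck_flags(hc):
    """Translate a Kolla-style healthcheck dict (as seen in the log above)
    into docker-run style flags. Interval/timeout values are seconds.
    Illustrative sketch; kolla-ansible's real templating differs."""
    if hc.get("test") == "NONE":
        # e.g. ceilometer-central above disables its healthcheck entirely
        return ["--no-healthcheck"]
    flags = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    test = hc["test"]
    if isinstance(test, list) and test[0] == "CMD-SHELL":
        # healthcheck_port / healthcheck_curl are Kolla helper scripts
        # shipped inside the images
        flags += ["--health-cmd", test[1]]
    return flags
```

For the `ceilometer-compute` item above this yields `--health-cmd 'healthcheck_port ceilometer-polling 5672'` with a 30-second interval, while `ceilometer-central` (test `'NONE'`) gets no healthcheck at all.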
2026-04-09 03:49:34.152044 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-09 03:49:34.152052 | orchestrator | Thursday 09 April 2026 03:48:57 +0000 (0:00:01.818) 0:01:08.002 ******** 2026-04-09 03:49:34.152059 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:49:34.152067 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:49:34.152073 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:49:34.152080 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:49:34.152088 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:49:34.152094 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:49:34.152100 | orchestrator | 2026-04-09 03:49:34.152106 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-09 03:49:34.152112 | orchestrator | Thursday 09 April 2026 03:48:58 +0000 (0:00:00.711) 0:01:08.714 ******** 2026-04-09 03:49:34.152118 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:49:34.152124 | orchestrator | 2026-04-09 03:49:34.152131 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 03:49:34.152137 | orchestrator | Thursday 09 April 2026 03:49:03 +0000 (0:00:04.617) 0:01:13.331 ******** 2026-04-09 03:49:34.152144 | orchestrator | 2026-04-09 03:49:34.152151 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 03:49:34.152157 | orchestrator | Thursday 09 April 2026 03:49:03 +0000 (0:00:00.078) 0:01:13.410 ******** 2026-04-09 03:49:34.152164 | orchestrator | 2026-04-09 03:49:34.152178 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 03:49:34.152184 | orchestrator | Thursday 09 April 2026 03:49:03 +0000 (0:00:00.075) 0:01:13.485 ******** 2026-04-09 03:49:34.152190 | orchestrator | 2026-04-09 03:49:34.152197 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-04-09 03:49:34.152203 | orchestrator | Thursday 09 April 2026 03:49:03 +0000 (0:00:00.271) 0:01:13.757 ******** 2026-04-09 03:49:34.152209 | orchestrator | 2026-04-09 03:49:34.152215 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 03:49:34.152220 | orchestrator | Thursday 09 April 2026 03:49:03 +0000 (0:00:00.081) 0:01:13.838 ******** 2026-04-09 03:49:34.152226 | orchestrator | 2026-04-09 03:49:34.152233 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 03:49:34.152239 | orchestrator | Thursday 09 April 2026 03:49:03 +0000 (0:00:00.082) 0:01:13.920 ******** 2026-04-09 03:49:34.152245 | orchestrator | 2026-04-09 03:49:34.152251 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-04-09 03:49:34.152258 | orchestrator | Thursday 09 April 2026 03:49:03 +0000 (0:00:00.081) 0:01:14.002 ******** 2026-04-09 03:49:34.152264 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:49:34.152270 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:49:34.152275 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:49:34.152281 | orchestrator | 2026-04-09 03:49:34.152287 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-04-09 03:49:34.152294 | orchestrator | Thursday 09 April 2026 03:49:14 +0000 (0:00:10.683) 0:01:24.685 ******** 2026-04-09 03:49:34.152300 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:49:34.152307 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:49:34.152313 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:49:34.152319 | orchestrator | 2026-04-09 03:49:34.152325 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-04-09 03:49:34.152331 | orchestrator | Thursday 09 April 2026 03:49:22 +0000 
(0:00:07.822) 0:01:32.508 ******** 2026-04-09 03:49:34.152337 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:49:34.152343 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:49:34.152349 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:49:34.152356 | orchestrator | 2026-04-09 03:49:34.152362 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:49:34.152370 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-09 03:49:34.152379 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 03:49:34.152394 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 03:49:34.697792 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-09 03:49:34.697918 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-09 03:49:34.697937 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-09 03:49:34.697953 | orchestrator | 2026-04-09 03:49:34.697968 | orchestrator | 2026-04-09 03:49:34.698106 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:49:34.698119 | orchestrator | Thursday 09 April 2026 03:49:34 +0000 (0:00:11.823) 0:01:44.332 ******** 2026-04-09 03:49:34.698130 | orchestrator | =============================================================================== 2026-04-09 03:49:34.698141 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.82s 2026-04-09 03:49:34.698181 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.68s 2026-04-09 03:49:34.698193 | orchestrator | ceilometer : Restart 
ceilometer-central container ----------------------- 7.82s 2026-04-09 03:49:34.698204 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.59s 2026-04-09 03:49:34.698215 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.62s 2026-04-09 03:49:34.698225 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.97s 2026-04-09 03:49:34.698236 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.85s 2026-04-09 03:49:34.698247 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.80s 2026-04-09 03:49:34.698258 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.20s 2026-04-09 03:49:34.698269 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.71s 2026-04-09 03:49:34.698279 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.66s 2026-04-09 03:49:34.698290 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.31s 2026-04-09 03:49:34.698301 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.93s 2026-04-09 03:49:34.698313 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.82s 2026-04-09 03:49:34.698324 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.82s 2026-04-09 03:49:34.698335 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.79s 2026-04-09 03:49:34.698348 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.78s 2026-04-09 03:49:34.698360 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.69s 2026-04-09 03:49:34.698373 | orchestrator | service-cert-copy : ceilometer | 
Copying over backend internal TLS key --- 1.66s 2026-04-09 03:49:34.698385 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.62s 2026-04-09 03:49:37.358650 | orchestrator | 2026-04-09 03:49:37 | INFO  | Task f3840b4d-a4dd-46c4-837d-53fa0365ae35 (aodh) was prepared for execution. 2026-04-09 03:49:37.358732 | orchestrator | 2026-04-09 03:49:37 | INFO  | It takes a moment until task f3840b4d-a4dd-46c4-837d-53fa0365ae35 (aodh) has been started and output is visible here. 2026-04-09 03:50:10.791100 | orchestrator | 2026-04-09 03:50:10.791243 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:50:10.791271 | orchestrator | 2026-04-09 03:50:10.791289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:50:10.791308 | orchestrator | Thursday 09 April 2026 03:49:42 +0000 (0:00:00.296) 0:00:00.296 ******** 2026-04-09 03:50:10.791326 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:50:10.791343 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:50:10.791360 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:50:10.791377 | orchestrator | 2026-04-09 03:50:10.791395 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:50:10.791413 | orchestrator | Thursday 09 April 2026 03:49:42 +0000 (0:00:00.348) 0:00:00.645 ******** 2026-04-09 03:50:10.791433 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-09 03:50:10.791452 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-09 03:50:10.791468 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-09 03:50:10.791478 | orchestrator | 2026-04-09 03:50:10.791488 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-04-09 03:50:10.791498 | orchestrator | 2026-04-09 03:50:10.791508 | orchestrator | TASK [aodh : 
include_tasks] **************************************************** 2026-04-09 03:50:10.791518 | orchestrator | Thursday 09 April 2026 03:49:42 +0000 (0:00:00.482) 0:00:01.127 ******** 2026-04-09 03:50:10.791528 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:50:10.791541 | orchestrator | 2026-04-09 03:50:10.791580 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-04-09 03:50:10.791592 | orchestrator | Thursday 09 April 2026 03:49:43 +0000 (0:00:00.619) 0:00:01.746 ******** 2026-04-09 03:50:10.791603 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-04-09 03:50:10.791614 | orchestrator | 2026-04-09 03:50:10.791626 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-04-09 03:50:10.791636 | orchestrator | Thursday 09 April 2026 03:49:47 +0000 (0:00:03.630) 0:00:05.377 ******** 2026-04-09 03:50:10.791645 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-04-09 03:50:10.791655 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-04-09 03:50:10.791665 | orchestrator | 2026-04-09 03:50:10.791674 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-04-09 03:50:10.791684 | orchestrator | Thursday 09 April 2026 03:49:53 +0000 (0:00:06.694) 0:00:12.072 ******** 2026-04-09 03:50:10.791694 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 03:50:10.791704 | orchestrator | 2026-04-09 03:50:10.791713 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-04-09 03:50:10.791723 | orchestrator | Thursday 09 April 2026 03:49:57 +0000 (0:00:03.647) 0:00:15.720 ******** 2026-04-09 03:50:10.791733 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2026-04-09 03:50:10.791742 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-04-09 03:50:10.791752 | orchestrator | 2026-04-09 03:50:10.791761 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-04-09 03:50:10.791771 | orchestrator | Thursday 09 April 2026 03:50:01 +0000 (0:00:04.017) 0:00:19.738 ******** 2026-04-09 03:50:10.791780 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 03:50:10.791790 | orchestrator | 2026-04-09 03:50:10.791799 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-04-09 03:50:10.791809 | orchestrator | Thursday 09 April 2026 03:50:04 +0000 (0:00:03.369) 0:00:23.107 ******** 2026-04-09 03:50:10.791818 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-04-09 03:50:10.791828 | orchestrator | 2026-04-09 03:50:10.791837 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-09 03:50:10.791847 | orchestrator | Thursday 09 April 2026 03:50:08 +0000 (0:00:03.905) 0:00:27.013 ******** 2026-04-09 03:50:10.791860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:10.791897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:10.791917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:10.791927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:10.791938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:10.791948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:10.791958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:10.791976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:12.324216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:12.324321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:12.324338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:12.324350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:12.324363 | orchestrator | 2026-04-09 03:50:12.324376 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-09 03:50:12.324389 | orchestrator | Thursday 09 April 2026 03:50:10 +0000 (0:00:02.040) 0:00:29.053 ******** 2026-04-09 03:50:12.324400 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:50:12.324412 | orchestrator | 2026-04-09 
03:50:12.324423 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-09 03:50:12.324434 | orchestrator | Thursday 09 April 2026 03:50:10 +0000 (0:00:00.146) 0:00:29.200 ******** 2026-04-09 03:50:12.324445 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:50:12.324456 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:50:12.324467 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:50:12.324478 | orchestrator | 2026-04-09 03:50:12.324489 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-04-09 03:50:12.324499 | orchestrator | Thursday 09 April 2026 03:50:11 +0000 (0:00:00.552) 0:00:29.753 ******** 2026-04-09 03:50:12.324512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-09 03:50:12.324566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 03:50:12.324580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:50:12.324592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 03:50:12.324603 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:50:12.324615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-09 03:50:12.324626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 03:50:12.324638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:50:12.324667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 03:50:17.509454 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:50:17.509562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-09 03:50:17.509580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-04-09 03:50:17.509593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:50:17.509604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 03:50:17.509614 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:50:17.509624 | orchestrator | 2026-04-09 03:50:17.509635 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-09 03:50:17.509646 | orchestrator | Thursday 09 April 2026 03:50:12 +0000 (0:00:00.836) 0:00:30.590 ******** 2026-04-09 03:50:17.509682 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:50:17.509694 | orchestrator | 2026-04-09 03:50:17.509703 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-09 03:50:17.509713 | orchestrator | Thursday 
09 April 2026 03:50:13 +0000 (0:00:00.800) 0:00:31.390 ******** 2026-04-09 03:50:17.509723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:17.509752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:17.509763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:17.509774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:17.509784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-04-09 03:50:17.509802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:17.509812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:17.509830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:18.174205 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:18.174271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:18.174277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:18.174281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:18.174300 | orchestrator | 2026-04-09 03:50:18.174306 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-09 03:50:18.174311 | orchestrator | Thursday 09 April 2026 03:50:17 +0000 (0:00:04.385) 0:00:35.775 ******** 2026-04-09 03:50:18.174317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-09 03:50:18.174322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 03:50:18.174338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:50:18.174343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 03:50:18.174347 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:50:18.174352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-09 03:50:18.174362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 03:50:18.174366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:50:18.174370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 03:50:18.174374 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:50:18.174382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-09 03:50:19.292427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-04-09 03:50:19.293396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:50:19.293455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 03:50:19.293468 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:50:19.293481 | orchestrator | 2026-04-09 03:50:19.293491 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-09 03:50:19.293502 | orchestrator | Thursday 09 April 2026 03:50:18 +0000 (0:00:00.668) 0:00:36.444 ******** 2026-04-09 03:50:19.293513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-09 03:50:19.293524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 03:50:19.293534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:50:19.293565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 03:50:19.293583 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:50:19.293593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-09 03:50:19.293603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 03:50:19.293613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:50:19.293623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 03:50:19.293632 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:50:19.293650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-09 03:50:23.593570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 03:50:23.593713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 03:50:23.593732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 03:50:23.593746 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:50:23.593761 | orchestrator | 2026-04-09 03:50:23.593776 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-09 03:50:23.593790 | orchestrator | Thursday 09 April 2026 03:50:19 +0000 (0:00:01.113) 0:00:37.557 ******** 2026-04-09 03:50:23.593804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:23.593819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:23.593855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:23.593885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:23.593918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:23.593933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:23.593946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:23.593960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:23.593974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:23.594004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650358 | orchestrator | 2026-04-09 03:50:32.650371 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-09 03:50:32.650385 | orchestrator | Thursday 09 April 2026 03:50:23 +0000 (0:00:04.298) 0:00:41.856 ******** 2026-04-09 03:50:32.650397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:32.650410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:32.650422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:32.650478 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:32.650576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:37.828578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:37.828654 | orchestrator | 2026-04-09 03:50:37.828662 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-09 03:50:37.828668 | orchestrator | Thursday 09 April 2026 03:50:32 +0000 (0:00:09.053) 0:00:50.909 ******** 2026-04-09 03:50:37.828672 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:50:37.828677 | orchestrator | 
changed: [testbed-node-0] 2026-04-09 03:50:37.828681 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:50:37.828684 | orchestrator | 2026-04-09 03:50:37.828689 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-04-09 03:50:37.828693 | orchestrator | Thursday 09 April 2026 03:50:34 +0000 (0:00:01.880) 0:00:52.790 ******** 2026-04-09 03:50:37.828698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:37.828703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:37.828724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-09 03:50:37.828740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:37.828745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:37.828749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 03:50:37.828753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:37.828757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:37.828765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:37.828769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:50:37.828777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:51:24.183462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 03:51:24.183574 | orchestrator | 2026-04-09 03:51:24.183586 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-09 03:51:24.183594 | orchestrator | Thursday 09 April 2026 03:50:37 +0000 (0:00:03.302) 0:00:56.093 ******** 2026-04-09 03:51:24.183601 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:51:24.183609 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:51:24.183615 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:51:24.183622 | orchestrator | 2026-04-09 03:51:24.183628 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-04-09 03:51:24.183635 | orchestrator | Thursday 09 April 2026 03:50:38 +0000 (0:00:00.361) 0:00:56.454 ******** 2026-04-09 03:51:24.183642 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:51:24.183648 | orchestrator | 2026-04-09 03:51:24.183654 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-04-09 03:51:24.183660 | orchestrator | Thursday 09 April 2026 03:50:40 +0000 (0:00:02.181) 0:00:58.636 ******** 2026-04-09 03:51:24.183666 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:51:24.183693 | orchestrator | 2026-04-09 
03:51:24.183700 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-04-09 03:51:24.183706 | orchestrator | Thursday 09 April 2026 03:50:42 +0000 (0:00:02.435) 0:01:01.071 ******** 2026-04-09 03:51:24.183712 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:51:24.183719 | orchestrator | 2026-04-09 03:51:24.183725 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-09 03:51:24.183731 | orchestrator | Thursday 09 April 2026 03:50:55 +0000 (0:00:13.065) 0:01:14.136 ******** 2026-04-09 03:51:24.183740 | orchestrator | 2026-04-09 03:51:24.183750 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-09 03:51:24.183759 | orchestrator | Thursday 09 April 2026 03:50:55 +0000 (0:00:00.076) 0:01:14.213 ******** 2026-04-09 03:51:24.183769 | orchestrator | 2026-04-09 03:51:24.183779 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-09 03:51:24.183789 | orchestrator | Thursday 09 April 2026 03:50:56 +0000 (0:00:00.073) 0:01:14.287 ******** 2026-04-09 03:51:24.183800 | orchestrator | 2026-04-09 03:51:24.183810 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-04-09 03:51:24.183820 | orchestrator | Thursday 09 April 2026 03:50:56 +0000 (0:00:00.276) 0:01:14.564 ******** 2026-04-09 03:51:24.183830 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:51:24.183840 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:51:24.183851 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:51:24.183860 | orchestrator | 2026-04-09 03:51:24.183870 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-04-09 03:51:24.183880 | orchestrator | Thursday 09 April 2026 03:51:02 +0000 (0:00:05.934) 0:01:20.498 ******** 2026-04-09 03:51:24.183890 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 03:51:24.183900 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:51:24.183910 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:51:24.183918 | orchestrator | 2026-04-09 03:51:24.183929 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-04-09 03:51:24.183939 | orchestrator | Thursday 09 April 2026 03:51:07 +0000 (0:00:05.371) 0:01:25.870 ******** 2026-04-09 03:51:24.183949 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:51:24.183959 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:51:24.183970 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:51:24.183980 | orchestrator | 2026-04-09 03:51:24.183992 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-04-09 03:51:24.184002 | orchestrator | Thursday 09 April 2026 03:51:18 +0000 (0:00:10.502) 0:01:36.372 ******** 2026-04-09 03:51:24.184013 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:51:24.184024 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:51:24.184035 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:51:24.184047 | orchestrator | 2026-04-09 03:51:24.184058 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:51:24.184070 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 03:51:24.184082 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 03:51:24.184093 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 03:51:24.184103 | orchestrator | 2026-04-09 03:51:24.184110 | orchestrator | 2026-04-09 03:51:24.184116 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:51:24.184123 | orchestrator | Thursday 09 April 2026 
03:51:23 +0000 (0:00:05.633) 0:01:42.006 ******** 2026-04-09 03:51:24.184129 | orchestrator | =============================================================================== 2026-04-09 03:51:24.184135 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.07s 2026-04-09 03:51:24.184180 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.50s 2026-04-09 03:51:24.184203 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.05s 2026-04-09 03:51:24.184209 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.69s 2026-04-09 03:51:24.184216 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.93s 2026-04-09 03:51:24.184222 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.63s 2026-04-09 03:51:24.184228 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 5.37s 2026-04-09 03:51:24.184234 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.39s 2026-04-09 03:51:24.184241 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.30s 2026-04-09 03:51:24.184247 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.02s 2026-04-09 03:51:24.184253 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.91s 2026-04-09 03:51:24.184259 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.65s 2026-04-09 03:51:24.184265 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.63s 2026-04-09 03:51:24.184271 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.37s 2026-04-09 03:51:24.184277 | orchestrator | aodh : Check aodh containers 
-------------------------------------------- 3.30s 2026-04-09 03:51:24.184283 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.44s 2026-04-09 03:51:24.184289 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.18s 2026-04-09 03:51:24.184295 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.04s 2026-04-09 03:51:24.184302 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.88s 2026-04-09 03:51:24.184308 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.11s 2026-04-09 03:51:26.757780 | orchestrator | 2026-04-09 03:51:26 | INFO  | Task 838c2a90-348e-48d0-83ec-c6fd7474e7b8 (kolla-ceph-rgw) was prepared for execution. 2026-04-09 03:51:26.758327 | orchestrator | 2026-04-09 03:51:26 | INFO  | It takes a moment until task 838c2a90-348e-48d0-83ec-c6fd7474e7b8 (kolla-ceph-rgw) has been started and output is visible here. 
2026-04-09 03:52:05.229534 | orchestrator | 2026-04-09 03:52:05.229625 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:52:05.229635 | orchestrator | 2026-04-09 03:52:05.229642 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:52:05.229649 | orchestrator | Thursday 09 April 2026 03:51:31 +0000 (0:00:00.324) 0:00:00.324 ******** 2026-04-09 03:52:05.229656 | orchestrator | ok: [testbed-manager] 2026-04-09 03:52:05.229664 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:52:05.229671 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:52:05.229677 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:52:05.229683 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:52:05.229689 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:52:05.229696 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:52:05.229702 | orchestrator | 2026-04-09 03:52:05.229709 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:52:05.229715 | orchestrator | Thursday 09 April 2026 03:51:32 +0000 (0:00:00.927) 0:00:01.252 ******** 2026-04-09 03:52:05.229722 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-09 03:52:05.229729 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-09 03:52:05.229735 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-09 03:52:05.229742 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-09 03:52:05.229748 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-09 03:52:05.229754 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-09 03:52:05.229760 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-09 03:52:05.229786 | orchestrator | 2026-04-09 03:52:05.229793 | orchestrator | PLAY [Apply role ceph-rgw] 
***************************************************** 2026-04-09 03:52:05.229800 | orchestrator | 2026-04-09 03:52:05.229806 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-09 03:52:05.229812 | orchestrator | Thursday 09 April 2026 03:51:33 +0000 (0:00:00.816) 0:00:02.069 ******** 2026-04-09 03:52:05.229819 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:52:05.229826 | orchestrator | 2026-04-09 03:52:05.229832 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-09 03:52:05.229838 | orchestrator | Thursday 09 April 2026 03:51:34 +0000 (0:00:01.673) 0:00:03.742 ******** 2026-04-09 03:52:05.229845 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-04-09 03:52:05.229851 | orchestrator | 2026-04-09 03:52:05.229858 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-09 03:52:05.229864 | orchestrator | Thursday 09 April 2026 03:51:38 +0000 (0:00:04.059) 0:00:07.801 ******** 2026-04-09 03:52:05.229871 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-09 03:52:05.229879 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-09 03:52:05.229885 | orchestrator | 2026-04-09 03:52:05.229891 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-09 03:52:05.229897 | orchestrator | Thursday 09 April 2026 03:51:45 +0000 (0:00:06.928) 0:00:14.730 ******** 2026-04-09 03:52:05.229904 | orchestrator | ok: [testbed-manager] => (item=service) 2026-04-09 03:52:05.229910 | orchestrator | 2026-04-09 03:52:05.229916 
| orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-09 03:52:05.229922 | orchestrator | Thursday 09 April 2026 03:51:49 +0000 (0:00:03.429) 0:00:18.160 ******** 2026-04-09 03:52:05.229928 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 03:52:05.229943 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-04-09 03:52:05.229953 | orchestrator | 2026-04-09 03:52:05.229964 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-09 03:52:05.229975 | orchestrator | Thursday 09 April 2026 03:51:53 +0000 (0:00:03.898) 0:00:22.058 ******** 2026-04-09 03:52:05.229985 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-04-09 03:52:05.229995 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-04-09 03:52:05.230005 | orchestrator | 2026-04-09 03:52:05.230062 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-09 03:52:05.230073 | orchestrator | Thursday 09 April 2026 03:51:59 +0000 (0:00:06.440) 0:00:28.499 ******** 2026-04-09 03:52:05.230084 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-04-09 03:52:05.230095 | orchestrator | 2026-04-09 03:52:05.230106 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:52:05.230116 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:05.230128 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:05.230139 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:05.230150 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:05.230162 | 
orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:05.230200 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:05.230213 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:05.230225 | orchestrator | 2026-04-09 03:52:05.230234 | orchestrator | 2026-04-09 03:52:05.230242 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:52:05.230250 | orchestrator | Thursday 09 April 2026 03:52:04 +0000 (0:00:05.074) 0:00:33.574 ******** 2026-04-09 03:52:05.230257 | orchestrator | =============================================================================== 2026-04-09 03:52:05.230265 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.93s 2026-04-09 03:52:05.230296 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.44s 2026-04-09 03:52:05.230304 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.07s 2026-04-09 03:52:05.230311 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.06s 2026-04-09 03:52:05.230318 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.90s 2026-04-09 03:52:05.230326 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.43s 2026-04-09 03:52:05.230333 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.67s 2026-04-09 03:52:05.230340 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s 2026-04-09 03:52:05.230347 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2026-04-09 03:52:07.804636 | orchestrator | 2026-04-09 03:52:07 | 
INFO  | Task dcb77c13-fedf-4175-8d91-01015c5b77a8 (gnocchi) was prepared for execution. 2026-04-09 03:52:07.804743 | orchestrator | 2026-04-09 03:52:07 | INFO  | It takes a moment until task dcb77c13-fedf-4175-8d91-01015c5b77a8 (gnocchi) has been started and output is visible here. 2026-04-09 03:52:13.535676 | orchestrator | 2026-04-09 03:52:13.535776 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:52:13.535786 | orchestrator | 2026-04-09 03:52:13.535793 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:52:13.535800 | orchestrator | Thursday 09 April 2026 03:52:12 +0000 (0:00:00.293) 0:00:00.293 ******** 2026-04-09 03:52:13.535806 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:52:13.535814 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:52:13.535820 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:52:13.535827 | orchestrator | 2026-04-09 03:52:13.535833 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:52:13.535840 | orchestrator | Thursday 09 April 2026 03:52:12 +0000 (0:00:00.390) 0:00:00.683 ******** 2026-04-09 03:52:13.535846 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-04-09 03:52:13.535853 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-04-09 03:52:13.535861 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-04-09 03:52:13.535868 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-04-09 03:52:13.535875 | orchestrator | 2026-04-09 03:52:13.535882 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-04-09 03:52:13.535889 | orchestrator | skipping: no hosts matched 2026-04-09 03:52:13.535897 | orchestrator | 2026-04-09 03:52:13.535903 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-09 03:52:13.535911 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:13.535919 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:13.535925 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:52:13.535958 | orchestrator | 2026-04-09 03:52:13.535964 | orchestrator | 2026-04-09 03:52:13.535971 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:52:13.535977 | orchestrator | Thursday 09 April 2026 03:52:13 +0000 (0:00:00.419) 0:00:01.102 ******** 2026-04-09 03:52:13.535982 | orchestrator | =============================================================================== 2026-04-09 03:52:13.535988 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-04-09 03:52:13.535994 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2026-04-09 03:52:16.348245 | orchestrator | 2026-04-09 03:52:16 | INFO  | Task 5489844a-6205-44d4-b785-3e94bee2a3c5 (manila) was prepared for execution. 2026-04-09 03:52:16.348382 | orchestrator | 2026-04-09 03:52:16 | INFO  | It takes a moment until task 5489844a-6205-44d4-b785-3e94bee2a3c5 (manila) has been started and output is visible here. 
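Each play ends with a "PLAY RECAP" block of per-host counters like `testbed-node-0 : ok=2 changed=0 unreachable=0 failed=0 ...`. A wrapper that watches these logs could parse those counters to decide whether a host is healthy. A minimal sketch, assuming the standard `key=value` recap format shown above (the helper name is illustrative, not from the job):

```python
import re

# Matches Ansible PLAY RECAP host lines: "<host> : ok=N changed=N ...".
HOST_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Return (host, {counter: int}) for a recap line, or None if no match."""
    m = HOST_RE.match(line.strip())
    if not m:
        return None
    stats = dict(
        (key, int(val))
        for key, val in (pair.split("=") for pair in m.group("stats").split())
    )
    return m.group("host"), stats

# Sample line copied from the gnocchi recap above.
host, stats = parse_recap_line(
    "testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 "
    "skipped=0 rescued=0 ignored=0"
)
healthy = stats["failed"] == 0 and stats["unreachable"] == 0
print(host, healthy)
# → testbed-node-0 True
```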
2026-04-09 03:52:58.845448 | orchestrator | 2026-04-09 03:52:58.845559 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:52:58.845575 | orchestrator | 2026-04-09 03:52:58.845587 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:52:58.845599 | orchestrator | Thursday 09 April 2026 03:52:20 +0000 (0:00:00.300) 0:00:00.300 ******** 2026-04-09 03:52:58.845610 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:52:58.845621 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:52:58.845632 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:52:58.845643 | orchestrator | 2026-04-09 03:52:58.845653 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:52:58.845664 | orchestrator | Thursday 09 April 2026 03:52:21 +0000 (0:00:00.366) 0:00:00.666 ******** 2026-04-09 03:52:58.845674 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-04-09 03:52:58.845684 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-04-09 03:52:58.845694 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-04-09 03:52:58.845705 | orchestrator | 2026-04-09 03:52:58.845716 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-04-09 03:52:58.845727 | orchestrator | 2026-04-09 03:52:58.845738 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-09 03:52:58.845748 | orchestrator | Thursday 09 April 2026 03:52:21 +0000 (0:00:00.469) 0:00:01.136 ******** 2026-04-09 03:52:58.845775 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:52:58.845787 | orchestrator | 2026-04-09 03:52:58.845799 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-09 
03:52:58.845809 | orchestrator | Thursday 09 April 2026 03:52:22 +0000 (0:00:00.584) 0:00:01.720 ******** 2026-04-09 03:52:58.845819 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:52:58.845831 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:52:58.845841 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:52:58.845852 | orchestrator | 2026-04-09 03:52:58.845862 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-04-09 03:52:58.845872 | orchestrator | Thursday 09 April 2026 03:52:22 +0000 (0:00:00.490) 0:00:02.211 ******** 2026-04-09 03:52:58.845882 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-04-09 03:52:58.845893 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-04-09 03:52:58.845903 | orchestrator | 2026-04-09 03:52:58.845913 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-04-09 03:52:58.845923 | orchestrator | Thursday 09 April 2026 03:52:29 +0000 (0:00:06.680) 0:00:08.892 ******** 2026-04-09 03:52:58.845933 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-04-09 03:52:58.845944 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-04-09 03:52:58.845982 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-04-09 03:52:58.845994 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-04-09 03:52:58.846004 | orchestrator | 2026-04-09 03:52:58.846075 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************ 2026-04-09 03:52:58.846087 | orchestrator | Thursday 09 April 2026 03:52:42 +0000 (0:00:12.812) 0:00:21.705 ******** 2026-04-09 03:52:58.846099 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 03:52:58.846111 | orchestrator | 2026-04-09 03:52:58.846122 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-04-09 03:52:58.846133 | orchestrator | Thursday 09 April 2026 03:52:45 +0000 (0:00:03.226) 0:00:24.931 ******** 2026-04-09 03:52:58.846144 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 03:52:58.846156 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-04-09 03:52:58.846167 | orchestrator | 2026-04-09 03:52:58.846179 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-04-09 03:52:58.846190 | orchestrator | Thursday 09 April 2026 03:52:49 +0000 (0:00:03.814) 0:00:28.746 ******** 2026-04-09 03:52:58.846201 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 03:52:58.846213 | orchestrator | 2026-04-09 03:52:58.846224 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-04-09 03:52:58.846236 | orchestrator | Thursday 09 April 2026 03:52:52 +0000 (0:00:03.162) 0:00:31.908 ******** 2026-04-09 03:52:58.846247 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-04-09 03:52:58.846259 | orchestrator | 2026-04-09 03:52:58.846270 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-04-09 03:52:58.846281 | orchestrator | Thursday 09 April 2026 03:52:56 +0000 (0:00:03.807) 0:00:35.715 ******** 2026-04-09 03:52:58.846320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:52:58.846367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:52:58.846381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:52:58.846403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:52:58.846417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:52:58.846428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:52:58.846448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.001755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.001853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.001877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.001882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.001886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.001890 | orchestrator | 2026-04-09 03:53:10.001896 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-09 03:53:10.001901 | orchestrator | Thursday 09 April 2026 03:52:58 +0000 (0:00:02.589) 0:00:38.305 ******** 2026-04-09 03:53:10.001905 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:53:10.001909 | orchestrator | 2026-04-09 03:53:10.001913 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-04-09 03:53:10.001917 | orchestrator | Thursday 09 April 2026 03:52:59 +0000 (0:00:00.696) 0:00:39.001 ******** 2026-04-09 03:53:10.001921 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:53:10.001928 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:53:10.001934 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:53:10.001939 | orchestrator | 2026-04-09 03:53:10.001944 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-04-09 03:53:10.001950 | orchestrator | Thursday 09 April 2026 03:53:00 +0000 (0:00:01.081) 0:00:40.083 ******** 2026-04-09 03:53:10.001960 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 03:53:10.001982 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 03:53:10.001989 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 03:53:10.002001 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 03:53:10.002007 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 03:53:10.002060 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 03:53:10.002069 | orchestrator | 2026-04-09 03:53:10.002075 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-04-09 03:53:10.002082 | orchestrator | Thursday 09 April 2026 03:53:02 +0000 (0:00:01.901) 0:00:41.985 ******** 2026-04-09 03:53:10.002089 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 03:53:10.002097 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 03:53:10.002104 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 03:53:10.002110 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 
'protocols': ['NFS', 'CIFS']})  2026-04-09 03:53:10.002117 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 03:53:10.002124 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 03:53:10.002131 | orchestrator | 2026-04-09 03:53:10.002139 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-04-09 03:53:10.002146 | orchestrator | Thursday 09 April 2026 03:53:03 +0000 (0:00:01.266) 0:00:43.251 ******** 2026-04-09 03:53:10.002154 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-04-09 03:53:10.002162 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-04-09 03:53:10.002168 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-04-09 03:53:10.002175 | orchestrator | 2026-04-09 03:53:10.002182 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-04-09 03:53:10.002190 | orchestrator | Thursday 09 April 2026 03:53:04 +0000 (0:00:00.741) 0:00:43.992 ******** 2026-04-09 03:53:10.002197 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:53:10.002204 | orchestrator | 2026-04-09 03:53:10.002211 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-04-09 03:53:10.002218 | orchestrator | Thursday 09 April 2026 03:53:04 +0000 (0:00:00.143) 0:00:44.136 ******** 2026-04-09 03:53:10.002226 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:53:10.002233 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:53:10.002240 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:53:10.002247 | orchestrator | 2026-04-09 03:53:10.002254 | orchestrator | TASK [manila : include_tasks] 
************************************************** 2026-04-09 03:53:10.002261 | orchestrator | Thursday 09 April 2026 03:53:05 +0000 (0:00:00.577) 0:00:44.714 ******** 2026-04-09 03:53:10.002269 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 03:53:10.002276 | orchestrator | 2026-04-09 03:53:10.002284 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-04-09 03:53:10.002291 | orchestrator | Thursday 09 April 2026 03:53:05 +0000 (0:00:00.623) 0:00:45.337 ******** 2026-04-09 03:53:10.002312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:53:10.985806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:53:10.985901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:53:10.985914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.985925 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.985934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.985979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.985996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.986006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.986069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.986080 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.986089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:10.986106 | orchestrator | 2026-04-09 03:53:10.986117 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-09 03:53:10.986127 | orchestrator | Thursday 09 April 2026 03:53:10 +0000 (0:00:04.137) 0:00:49.474 ******** 2026-04-09 03:53:10.986144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 03:53:11.894948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:53:11.895035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:11.895049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 03:53:11.895058 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:53:11.895069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 03:53:11.895099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:53:11.895109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:11.895137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 03:53:11.895146 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:53:11.895154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 03:53:11.895163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:53:11.895172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:11.895186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 03:53:11.895194 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:53:11.895202 | orchestrator | 2026-04-09 03:53:11.895211 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-09 03:53:11.895220 | orchestrator | Thursday 09 April 2026 03:53:11 +0000 (0:00:01.045) 0:00:50.520 ******** 2026-04-09 03:53:11.895239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 03:53:16.675082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:53:16.675186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:16.675201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 03:53:16.675233 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:53:16.675250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 03:53:16.675265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:53:16.675280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:16.675329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 03:53:16.675340 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:53:16.675349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 03:53:16.675365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:53:16.675373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:16.675475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 03:53:16.675485 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:53:16.675493 | orchestrator | 2026-04-09 03:53:16.675502 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-09 03:53:16.675513 | orchestrator | Thursday 09 
April 2026 03:53:12 +0000 (0:00:01.115) 0:00:51.635 ******** 2026-04-09 03:53:16.675536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:53:24.153530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:53:24.153623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:53:24.153631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:24.153638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-09 03:53:24.153642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:24.153666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:24.153672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:24.153681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:24.153685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:24.153689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:24.153693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:24.153697 | orchestrator | 2026-04-09 03:53:24.153702 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-09 03:53:24.153708 | orchestrator | Thursday 09 April 2026 03:53:17 +0000 (0:00:04.773) 0:00:56.409 ******** 2026-04-09 03:53:24.153718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:53:28.911833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:53:28.911983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:53:28.912002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:28.912015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:28.912040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:28.912069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:28.912087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:28.912097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:28.912108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:28.912119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:28.912129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:53:28.912139 | orchestrator | 2026-04-09 03:53:28.912152 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-04-09 03:53:28.912163 | orchestrator | Thursday 09 April 2026 03:53:24 +0000 (0:00:07.216) 0:01:03.625 ******** 
2026-04-09 03:53:28.912179 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-09 03:53:28.912190 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-09 03:53:28.912199 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-09 03:53:28.912209 | orchestrator | 2026-04-09 03:53:28.912219 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-09 03:53:28.912235 | orchestrator | Thursday 09 April 2026 03:53:28 +0000 (0:00:04.047) 0:01:07.673 ******** 2026-04-09 03:53:28.912253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 03:53:32.285207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:53:32.285296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:32.285308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 03:53:32.285317 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:53:32.285327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 03:53:32.285348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 03:53:32.285374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:32.285394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 03:53:32.285402 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:53:32.285447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 03:53:32.285457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-04-09 03:53:32.285464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 03:53:32.285482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 03:53:32.285489 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:53:32.285509 | orchestrator | 2026-04-09 03:53:32.285525 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-04-09 03:53:32.285534 | orchestrator | Thursday 09 April 2026 03:53:28 +0000 (0:00:00.700) 0:01:08.374 ******** 2026-04-09 03:53:32.285548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:54:13.991953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:54:13.992076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 03:54:13.992097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:54:13.992163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:54:13.992181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 03:54:13.992212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:54:13.992226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:54:13.992238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 03:54:13.992250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:54:13.992277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:54:13.992291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 03:54:13.992303 | orchestrator | 2026-04-09 03:54:13.992316 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-04-09 03:54:13.992329 | orchestrator | Thursday 09 April 2026 03:53:32 +0000 (0:00:03.368) 0:01:11.742 ******** 2026-04-09 03:54:13.992341 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:54:13.992354 | orchestrator | 2026-04-09 03:54:13.992364 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-04-09 03:54:13.992377 | orchestrator | Thursday 09 April 2026 03:53:34 +0000 (0:00:02.055) 0:01:13.798 ******** 2026-04-09 03:54:13.992385 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:54:13.992391 | orchestrator | 2026-04-09 03:54:13.992398 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-04-09 03:54:13.992404 | orchestrator | Thursday 09 April 2026 03:53:36 +0000 (0:00:02.240) 0:01:16.038 ******** 2026-04-09 03:54:13.992411 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:54:13.992417 | orchestrator | 2026-04-09 03:54:13.992424 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-09 03:54:13.992430 | orchestrator | Thursday 09 April 2026 03:54:13 +0000 (0:00:37.085) 0:01:53.124 ******** 2026-04-09 03:54:13.992437 | orchestrator | 2026-04-09 03:54:13.992450 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-09 03:54:59.832640 | orchestrator | Thursday 09 April 2026 03:54:13 
+0000 (0:00:00.074) 0:01:53.198 ******** 2026-04-09 03:54:59.832752 | orchestrator | 2026-04-09 03:54:59.832768 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-09 03:54:59.832779 | orchestrator | Thursday 09 April 2026 03:54:13 +0000 (0:00:00.074) 0:01:53.272 ******** 2026-04-09 03:54:59.832789 | orchestrator | 2026-04-09 03:54:59.832799 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-04-09 03:54:59.832809 | orchestrator | Thursday 09 April 2026 03:54:13 +0000 (0:00:00.074) 0:01:53.347 ******** 2026-04-09 03:54:59.832819 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:54:59.832829 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:54:59.832839 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:54:59.832849 | orchestrator | 2026-04-09 03:54:59.832858 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-04-09 03:54:59.832868 | orchestrator | Thursday 09 April 2026 03:54:28 +0000 (0:00:14.953) 0:02:08.301 ******** 2026-04-09 03:54:59.832878 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:54:59.832888 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:54:59.832897 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:54:59.832907 | orchestrator | 2026-04-09 03:54:59.832916 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-04-09 03:54:59.832951 | orchestrator | Thursday 09 April 2026 03:54:35 +0000 (0:00:06.605) 0:02:14.907 ******** 2026-04-09 03:54:59.832962 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:54:59.832971 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:54:59.832981 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:54:59.832990 | orchestrator | 2026-04-09 03:54:59.833000 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-04-09 
03:54:59.833009 | orchestrator | Thursday 09 April 2026 03:54:46 +0000 (0:00:10.945) 0:02:25.852 ******** 2026-04-09 03:54:59.833019 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:54:59.833028 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:54:59.833038 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:54:59.833047 | orchestrator | 2026-04-09 03:54:59.833057 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:54:59.833068 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 03:54:59.833079 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 03:54:59.833088 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 03:54:59.833098 | orchestrator | 2026-04-09 03:54:59.833107 | orchestrator | 2026-04-09 03:54:59.833117 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:54:59.833126 | orchestrator | Thursday 09 April 2026 03:54:59 +0000 (0:00:12.814) 0:02:38.667 ******** 2026-04-09 03:54:59.833136 | orchestrator | =============================================================================== 2026-04-09 03:54:59.833145 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 37.09s 2026-04-09 03:54:59.833155 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.95s 2026-04-09 03:54:59.833164 | orchestrator | manila : Restart manila-share container -------------------------------- 12.81s 2026-04-09 03:54:59.833174 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.81s 2026-04-09 03:54:59.833183 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.95s 2026-04-09 03:54:59.833206 | 
orchestrator | manila : Copying over manila.conf --------------------------------------- 7.22s 2026-04-09 03:54:59.833216 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.68s 2026-04-09 03:54:59.833225 | orchestrator | manila : Restart manila-data container ---------------------------------- 6.61s 2026-04-09 03:54:59.833234 | orchestrator | manila : Copying over config.json files for services -------------------- 4.77s 2026-04-09 03:54:59.833244 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.14s 2026-04-09 03:54:59.833253 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.05s 2026-04-09 03:54:59.833263 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.81s 2026-04-09 03:54:59.833272 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.81s 2026-04-09 03:54:59.833282 | orchestrator | manila : Check manila containers ---------------------------------------- 3.37s 2026-04-09 03:54:59.833292 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.23s 2026-04-09 03:54:59.833302 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.16s 2026-04-09 03:54:59.833311 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.59s 2026-04-09 03:54:59.833321 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.24s 2026-04-09 03:54:59.833330 | orchestrator | manila : Creating Manila database --------------------------------------- 2.06s 2026-04-09 03:54:59.833340 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.90s 2026-04-09 03:55:00.235912 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-04-09 03:55:12.686070 | orchestrator | 2026-04-09 03:55:12 
| INFO  | Task d62fffa5-831e-4fa7-ad5c-2566d8faac99 (netdata) was prepared for execution. 2026-04-09 03:55:12.686166 | orchestrator | 2026-04-09 03:55:12 | INFO  | It takes a moment until task d62fffa5-831e-4fa7-ad5c-2566d8faac99 (netdata) has been started and output is visible here. 2026-04-09 03:56:49.884114 | orchestrator | 2026-04-09 03:56:49.884250 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:56:49.884274 | orchestrator | 2026-04-09 03:56:49.884289 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:56:49.884304 | orchestrator | Thursday 09 April 2026 03:55:17 +0000 (0:00:00.262) 0:00:00.262 ******** 2026-04-09 03:56:49.884320 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-09 03:56:49.884336 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-09 03:56:49.884351 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-09 03:56:49.884366 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-09 03:56:49.884382 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-09 03:56:49.884397 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-09 03:56:49.884412 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-09 03:56:49.884429 | orchestrator | 2026-04-09 03:56:49.884445 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-09 03:56:49.884461 | orchestrator | 2026-04-09 03:56:49.884474 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-09 03:56:49.884483 | orchestrator | Thursday 09 April 2026 03:55:18 +0000 (0:00:00.934) 0:00:01.197 ******** 2026-04-09 03:56:49.884495 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:56:49.884506 | orchestrator | 2026-04-09 03:56:49.884531 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-09 03:56:49.884549 | orchestrator | Thursday 09 April 2026 03:55:20 +0000 (0:00:01.428) 0:00:02.626 ******** 2026-04-09 03:56:49.884558 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:56:49.884568 | orchestrator | ok: [testbed-manager] 2026-04-09 03:56:49.884577 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:56:49.884586 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:56:49.884595 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:56:49.884603 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:56:49.884612 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:56:49.884621 | orchestrator | 2026-04-09 03:56:49.884630 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-09 03:56:49.884639 | orchestrator | Thursday 09 April 2026 03:55:22 +0000 (0:00:01.897) 0:00:04.524 ******** 2026-04-09 03:56:49.884647 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:56:49.884658 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:56:49.884672 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:56:49.884684 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:56:49.884723 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:56:49.884736 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:56:49.884748 | orchestrator | ok: [testbed-manager] 2026-04-09 03:56:49.884828 | orchestrator | 2026-04-09 03:56:49.884842 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-09 03:56:49.884855 | orchestrator | Thursday 09 April 2026 03:55:24 +0000 (0:00:02.474) 0:00:06.998 ******** 
2026-04-09 03:56:49.884867 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:56:49.884878 | orchestrator | changed: [testbed-manager] 2026-04-09 03:56:49.884889 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:56:49.884900 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:56:49.884938 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:56:49.884949 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:56:49.884960 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:56:49.884971 | orchestrator | 2026-04-09 03:56:49.884982 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-09 03:56:49.885008 | orchestrator | Thursday 09 April 2026 03:55:26 +0000 (0:00:01.674) 0:00:08.672 ******** 2026-04-09 03:56:49.885019 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:56:49.885030 | orchestrator | changed: [testbed-manager] 2026-04-09 03:56:49.885041 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:56:49.885051 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:56:49.885062 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:56:49.885073 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:56:49.885083 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:56:49.885094 | orchestrator | 2026-04-09 03:56:49.885105 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-09 03:56:49.885116 | orchestrator | Thursday 09 April 2026 03:55:41 +0000 (0:00:15.017) 0:00:23.690 ******** 2026-04-09 03:56:49.885127 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:56:49.885137 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:56:49.885148 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:56:49.885159 | orchestrator | changed: [testbed-manager] 2026-04-09 03:56:49.885169 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:56:49.885180 | orchestrator | changed: [testbed-node-1] 2026-04-09 
03:56:49.885191 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:56:49.885202 | orchestrator | 2026-04-09 03:56:49.885213 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-09 03:56:49.885224 | orchestrator | Thursday 09 April 2026 03:56:23 +0000 (0:00:41.795) 0:01:05.485 ******** 2026-04-09 03:56:49.885236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:56:49.885249 | orchestrator | 2026-04-09 03:56:49.885260 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-09 03:56:49.885271 | orchestrator | Thursday 09 April 2026 03:56:24 +0000 (0:00:01.796) 0:01:07.281 ******** 2026-04-09 03:56:49.885282 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-09 03:56:49.885293 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-09 03:56:49.885304 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-09 03:56:49.885315 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-09 03:56:49.885353 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-09 03:56:49.885371 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-09 03:56:49.885388 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-09 03:56:49.885406 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-09 03:56:49.885425 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-09 03:56:49.885444 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-09 03:56:49.885462 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-09 03:56:49.885475 | orchestrator | changed: [testbed-node-3] => 
(item=stream.conf) 2026-04-09 03:56:49.885486 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-09 03:56:49.885496 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-09 03:56:49.885507 | orchestrator | 2026-04-09 03:56:49.885518 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-09 03:56:49.885536 | orchestrator | Thursday 09 April 2026 03:56:28 +0000 (0:00:03.974) 0:01:11.256 ******** 2026-04-09 03:56:49.885553 | orchestrator | ok: [testbed-manager] 2026-04-09 03:56:49.885571 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:56:49.885590 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:56:49.885602 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:56:49.885624 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:56:49.885634 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:56:49.885645 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:56:49.885655 | orchestrator | 2026-04-09 03:56:49.885666 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-09 03:56:49.885677 | orchestrator | Thursday 09 April 2026 03:56:30 +0000 (0:00:01.445) 0:01:12.701 ******** 2026-04-09 03:56:49.885712 | orchestrator | changed: [testbed-manager] 2026-04-09 03:56:49.885725 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:56:49.885736 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:56:49.885747 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:56:49.885758 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:56:49.885768 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:56:49.885779 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:56:49.885790 | orchestrator | 2026-04-09 03:56:49.885805 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-09 03:56:49.885823 | orchestrator | Thursday 09 April 2026 03:56:31 +0000 
(0:00:01.441) 0:01:14.143 ******** 2026-04-09 03:56:49.885841 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:56:49.885858 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:56:49.885877 | orchestrator | ok: [testbed-manager] 2026-04-09 03:56:49.885897 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:56:49.885909 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:56:49.885919 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:56:49.885930 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:56:49.885940 | orchestrator | 2026-04-09 03:56:49.885951 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-09 03:56:49.885962 | orchestrator | Thursday 09 April 2026 03:56:33 +0000 (0:00:01.322) 0:01:15.465 ******** 2026-04-09 03:56:49.885973 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:56:49.885983 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:56:49.885994 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:56:49.886005 | orchestrator | ok: [testbed-manager] 2026-04-09 03:56:49.886084 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:56:49.886098 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:56:49.886109 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:56:49.886120 | orchestrator | 2026-04-09 03:56:49.886130 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-09 03:56:49.886141 | orchestrator | Thursday 09 April 2026 03:56:34 +0000 (0:00:01.698) 0:01:17.164 ******** 2026-04-09 03:56:49.886152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-09 03:56:49.886175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:56:49.886187 | orchestrator | 2026-04-09 
03:56:49.886197 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-09 03:56:49.886208 | orchestrator | Thursday 09 April 2026 03:56:36 +0000 (0:00:01.569) 0:01:18.734 ******** 2026-04-09 03:56:49.886219 | orchestrator | changed: [testbed-manager] 2026-04-09 03:56:49.886230 | orchestrator | 2026-04-09 03:56:49.886241 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-09 03:56:49.886252 | orchestrator | Thursday 09 April 2026 03:56:38 +0000 (0:00:02.291) 0:01:21.025 ******** 2026-04-09 03:56:49.886262 | orchestrator | changed: [testbed-node-3] 2026-04-09 03:56:49.886273 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:56:49.886284 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:56:49.886295 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:56:49.886305 | orchestrator | changed: [testbed-node-4] 2026-04-09 03:56:49.886316 | orchestrator | changed: [testbed-node-5] 2026-04-09 03:56:49.886328 | orchestrator | changed: [testbed-manager] 2026-04-09 03:56:49.886346 | orchestrator | 2026-04-09 03:56:49.886364 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 03:56:49.886419 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:56:49.886438 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:56:49.886456 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:56:49.886472 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:56:49.886506 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:56:50.380134 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:56:50.380269 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 03:56:50.380287 | orchestrator | 2026-04-09 03:56:50.380296 | orchestrator | 2026-04-09 03:56:50.380304 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 03:56:50.380313 | orchestrator | Thursday 09 April 2026 03:56:49 +0000 (0:00:11.221) 0:01:32.247 ******** 2026-04-09 03:56:50.380321 | orchestrator | =============================================================================== 2026-04-09 03:56:50.380328 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.80s 2026-04-09 03:56:50.380336 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.02s 2026-04-09 03:56:50.380343 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.22s 2026-04-09 03:56:50.380351 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.97s 2026-04-09 03:56:50.380358 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.47s 2026-04-09 03:56:50.380365 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.29s 2026-04-09 03:56:50.380372 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.90s 2026-04-09 03:56:50.380380 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.80s 2026-04-09 03:56:50.380387 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.70s 2026-04-09 03:56:50.380394 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.67s 2026-04-09 03:56:50.380401 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 
1.57s 2026-04-09 03:56:50.380408 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.45s 2026-04-09 03:56:50.380417 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.44s 2026-04-09 03:56:50.380424 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.43s 2026-04-09 03:56:50.380431 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.32s 2026-04-09 03:56:50.380438 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2026-04-09 03:56:53.206388 | orchestrator | 2026-04-09 03:56:53 | INFO  | Task 9f288f1c-40fc-4b97-b8b5-2232d2444e8e (prometheus) was prepared for execution. 2026-04-09 03:56:53.206488 | orchestrator | 2026-04-09 03:56:53 | INFO  | It takes a moment until task 9f288f1c-40fc-4b97-b8b5-2232d2444e8e (prometheus) has been started and output is visible here. 2026-04-09 03:57:04.200921 | orchestrator | 2026-04-09 03:57:04.201007 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 03:57:04.201015 | orchestrator | 2026-04-09 03:57:04.201020 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 03:57:04.201045 | orchestrator | Thursday 09 April 2026 03:56:58 +0000 (0:00:00.340) 0:00:00.340 ******** 2026-04-09 03:57:04.201051 | orchestrator | ok: [testbed-manager] 2026-04-09 03:57:04.201056 | orchestrator | ok: [testbed-node-0] 2026-04-09 03:57:04.201071 | orchestrator | ok: [testbed-node-1] 2026-04-09 03:57:04.201076 | orchestrator | ok: [testbed-node-2] 2026-04-09 03:57:04.201080 | orchestrator | ok: [testbed-node-3] 2026-04-09 03:57:04.201085 | orchestrator | ok: [testbed-node-4] 2026-04-09 03:57:04.201089 | orchestrator | ok: [testbed-node-5] 2026-04-09 03:57:04.201094 | orchestrator | 2026-04-09 03:57:04.201099 | orchestrator | 
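The PLAY RECAP lines earlier in this log (e.g. `testbed-manager : ok=16  changed=8  unreachable=0 failed=0 ...`) are what a wrapper script typically inspects to decide whether a deploy step succeeded. A minimal sketch of parsing such a line in Python — the helper name and the success criterion (`failed == 0 and unreachable == 0`) are assumptions for illustration, not part of this job:

```python
import re

# A host line copied from the netdata PLAY RECAP above.
recap = ("testbed-manager : ok=16  changed=8  unreachable=0 "
         "failed=0 skipped=0 rescued=0 ignored=0")

def parse_recap(line: str) -> dict:
    """Parse one Ansible PLAY RECAP host line into {host, counters}."""
    host, _, rest = line.partition(":")
    # Extract key=value counter pairs such as ok=16, changed=8, failed=0.
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return {"host": host.strip(), **counters}

result = parse_recap(recap)
# failed=0 and unreachable=0 is a common success check for a host.
assert result["failed"] == 0 and result["unreachable"] == 0
```

This only looks at per-host counters; a real gate would also check the process exit code of the `osism apply` task itself.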
TASK [Group hosts based on enabled services] *********************************** 2026-04-09 03:57:04.201103 | orchestrator | Thursday 09 April 2026 03:56:59 +0000 (0:00:00.952) 0:00:01.292 ******** 2026-04-09 03:57:04.201108 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-09 03:57:04.201113 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-09 03:57:04.201118 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-09 03:57:04.201122 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-09 03:57:04.201127 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-09 03:57:04.201131 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-09 03:57:04.201135 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-09 03:57:04.201140 | orchestrator | 2026-04-09 03:57:04.201144 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-09 03:57:04.201149 | orchestrator | 2026-04-09 03:57:04.201153 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-09 03:57:04.201158 | orchestrator | Thursday 09 April 2026 03:57:00 +0000 (0:00:01.401) 0:00:02.694 ******** 2026-04-09 03:57:04.201162 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:57:04.201168 | orchestrator | 2026-04-09 03:57:04.201173 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-09 03:57:04.201177 | orchestrator | Thursday 09 April 2026 03:57:02 +0000 (0:00:01.553) 0:00:04.248 ******** 2026-04-09 03:57:04.201185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:04.201193 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 03:57:04.201199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:04.201209 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:04.201228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:04.201234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:04.201239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 
03:57:04.201243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:04.201248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:04.201254 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:04.201260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:04.201272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:05.110349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:05.110445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 
03:57:05.110463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:05.110474 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 03:57:05.110486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:05.110514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:05.110540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:05.110557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:05.110567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:57:05.110576 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:57:05.110585 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:05.110594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:05.110609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:05.110618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:05.110639 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}}) 2026-04-09 03:57:10.466985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:10.467066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:57:10.467073 | orchestrator | 2026-04-09 03:57:10.467080 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-09 03:57:10.467086 | orchestrator | Thursday 09 April 2026 03:57:05 +0000 (0:00:03.007) 0:00:07.255 ******** 2026-04-09 03:57:10.467092 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 03:57:10.467098 | orchestrator | 2026-04-09 03:57:10.467103 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-09 03:57:10.467107 | orchestrator | Thursday 09 April 2026 03:57:06 +0000 (0:00:01.841) 0:00:09.096 ******** 2026-04-09 03:57:10.467111 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:10.467131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:10.467138 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 03:57:10.467144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:10.467169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:10.467174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:10.467178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:10.467183 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:10.467192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:10.467196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:10.467201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:10.467206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:10.467220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:12.405329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:12.405425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:12.405435 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:12.405463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:12.405470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:12.405477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:57:12.405499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:57:12.405519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-04-09 03:57:12.405527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:12.405535 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 03:57:12.405545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:12.405550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:12.405553 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:12.405562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:12.405575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:13.499571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:13.499687 | orchestrator | 2026-04-09 03:57:13.499701 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-09 03:57:13.499784 | orchestrator | Thursday 09 April 2026 03:57:12 +0000 (0:00:05.448) 0:00:14.545 ******** 2026-04-09 03:57:13.499795 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-09 03:57:13.499803 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:13.499810 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:13.499834 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-09 03:57:13.499859 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:13.499866 | orchestrator | skipping: [testbed-manager] 2026-04-09 03:57:13.499874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:13.499889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:13.499895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:13.499902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:13.499908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-09 03:57:13.499914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:13.499921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:13.499932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:14.298341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:14.298415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:14.298422 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:57:14.298428 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:57:14.298432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:14.298463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:14.298469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:14.298489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:14.298496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:14.298521 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:57:14.298545 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:14.298553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:14.298559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 03:57:14.298566 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:57:14.298572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:14.298580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:14.298587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 03:57:14.298594 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:57:14.298605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:14.298621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:15.961476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 03:57:15.961585 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:57:15.961600 | orchestrator | 2026-04-09 03:57:15.961608 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-09 03:57:15.961613 | orchestrator | Thursday 09 April 2026 03:57:14 +0000 (0:00:01.898) 0:00:16.443 ******** 2026-04-09 03:57:15.961620 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-09 03:57:15.961634 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:15.961641 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:15.961658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:15.961690 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-09 03:57:15.961698 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:15.961705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:15.961766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:15.961775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:15.961782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:15.961795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:15.961809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:15.961813 | orchestrator | skipping: [testbed-manager] 2026-04-09 03:57:15.961822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:17.168579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:17.168697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:17.168788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:17.168811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:17.168830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:17.168945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:17.168965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 03:57:17.168978 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:57:17.168992 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:57:17.169003 | orchestrator | 
skipping: [testbed-node-2] 2026-04-09 03:57:17.169038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:17.169050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:17.169061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 03:57:17.169073 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:57:17.169084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:17.169096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:17.169125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 03:57:17.169140 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:57:17.169152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 03:57:17.169173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 03:57:20.948291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 03:57:20.948413 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:57:20.948437 | orchestrator | 2026-04-09 03:57:20.948456 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-09 03:57:20.948475 | orchestrator | Thursday 09 April 2026 03:57:17 +0000 (0:00:02.864) 0:00:19.307 ******** 2026-04-09 03:57:20.948495 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 03:57:20.948516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:20.948568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:20.948606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:20.948617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:20.948645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:20.948655 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-04-09 03:57:20.948664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:57:20.948673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:20.948690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:20.948700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:20.948776 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:20.948788 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:20.948805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:24.038804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:24.038903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:24.038936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:24.038945 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:57:24.038968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:57:24.038977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:57:24.038984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:24.039010 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 03:57:24.039021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:24.039036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:24.039044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:57:24.039055 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:24.039063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:24.039071 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:24.039086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:57:28.434891 | orchestrator | 2026-04-09 03:57:28.435029 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-09 03:57:28.435054 | orchestrator | Thursday 09 April 2026 03:57:24 +0000 (0:00:06.869) 0:00:26.177 ******** 2026-04-09 03:57:28.435071 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 03:57:28.435089 | orchestrator | 2026-04-09 03:57:28.435107 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-09 03:57:28.435154 | orchestrator | Thursday 09 April 2026 03:57:25 +0000 (0:00:01.007) 0:00:27.185 ******** 2026-04-09 03:57:28.435167 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 
1362997, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3120785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435179 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1362997, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3120785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435189 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1363364, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4490561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435214 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1362997, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3120785, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435223 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1362997, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3120785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:57:28.435232 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1362989, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435260 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1363364, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4490561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-04-09 03:57:28.435278 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1362997, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3120785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435287 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1362997, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3120785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435296 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1362997, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3120785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435310 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1363069, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3672366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435319 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1363364, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4490561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435328 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1362989, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:28.435342 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1363364, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4490561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.392895 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1363364, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4490561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.392969 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1363364, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4490561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.392977 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1363069, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775699700.3672366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393000 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1363364, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4490561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:57:30.393005 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1362989, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393009 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1362984, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.308631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393027 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1362989, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393043 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1362989, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393047 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1363069, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3672366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393051 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1362989, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393059 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1363002, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3128295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393063 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1362984, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.308631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393067 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1363069, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3672366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393075 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1362984, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.308631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:30.393083 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1363069, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3672366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:32.591671 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1363069, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775699700.3672366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:32.591830 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1363002, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3128295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:32.591860 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1362984, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.308631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:32.591868 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1363067, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:32.591875 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1363002, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3128295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:32.591898 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1362984, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.308631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:32.591905 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1362989, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:57:32.591928 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1363002, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3128295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:32.591934 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1363067, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:32.591945 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1362984, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.308631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:32.591952 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1363067, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:32.591964 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1363007, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3131807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:32.591971 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1363067, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:32.591977 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1363002, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3128295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:32.591988 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1363007, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3131807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647426 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1362995, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647552 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1363007, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3131807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647571 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1363007, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3131807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647604 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1362995, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647615 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1363002, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3128295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647625 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1362995, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647637 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1363069, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3672366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647665 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1363361, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4459608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647681 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1363067, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647692 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1362995, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647709 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1363067, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647719 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1363361, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4459608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647866 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1363361, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4459608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647878 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362976, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.307368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:34.647897 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1363361, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4459608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427180 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1363007, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3131807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427288 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1363389, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427328 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1363007, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3131807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427338 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362976, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.307368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427345 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1362984, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.308631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427352 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362976, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.307368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427359 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362976, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.307368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427386 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1363389, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427399 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1362995, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427406 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1363351, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4455101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427414 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1362995, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427420 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1363389, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427426 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1363389, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427433 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1363351, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4455101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:36.427448 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1363351, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4455101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306386 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1363361, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4459608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306470 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362986, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3090808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306479 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1363361, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4459608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306486 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1363351, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4455101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306494 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1363002, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3128295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306500 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362986, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3090808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306520 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362986, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3090808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306555 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1362981, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3080778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306562 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362986, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3090808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306569 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362976, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.307368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306576 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362976, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.307368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306583 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1362981, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3080778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306589 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1362981, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3080778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306604 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1363389, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:38.306616 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1362981, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3080778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.027920 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1363064, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028022 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1363389, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028035 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1363351, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4455101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028046 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1363064, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028053 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1363064, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028118 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1363011, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3390152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028128 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1363064, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028151 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1363067, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028161 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1363351, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4455101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028169 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362986, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3090808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028178 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1363011, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3390152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028186 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1363385, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028201 | orchestrator | skipping: [testbed-node-0]
2026-04-09 03:57:40.028215 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1363011, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3390152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028224 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1363385, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:40.028232 | orchestrator | skipping: [testbed-node-3]
2026-04-09 03:57:40.028245 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362986, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3090808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 03:57:47.189642 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1363011, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3390152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189768 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1362981, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3080778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189777 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1363385, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189805 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:57:47.189812 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1363385, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189816 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:57:47.189831 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1362981, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3080778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189835 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1363064, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189850 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 2309, 'inode': 1363064, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189854 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1363007, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3131807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:57:47.189858 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1363011, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3390152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189862 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1363011, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3390152, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189869 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1363385, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189873 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:57:47.189880 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1363385, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 03:57:47.189884 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:57:47.189888 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1362995, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3109958, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:57:47.189895 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1363361, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4459608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:58:15.106504 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362976, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.307368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:58:15.106643 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1363389, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2026-04-09 03:58:15.106704 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1363351, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.4455101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:58:15.106725 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1362986, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3090808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:58:15.106788 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1362981, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3080778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:58:15.106811 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1363064, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.339959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:58:15.106830 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1363011, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3390152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:58:15.106871 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1363385, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.454644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 03:58:15.106890 | orchestrator | 2026-04-09 03:58:15.106911 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-09 03:58:15.106931 | orchestrator | Thursday 09 April 2026 03:57:53 +0000 (0:00:28.861) 0:00:56.047 ******** 2026-04-09 
03:58:15.106950 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 03:58:15.106970 | orchestrator | 2026-04-09 03:58:15.106988 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-09 03:58:15.107023 | orchestrator | Thursday 09 April 2026 03:57:54 +0000 (0:00:00.846) 0:00:56.894 ******** 2026-04-09 03:58:15.107041 | orchestrator | [WARNING]: Skipped 2026-04-09 03:58:15.107061 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107082 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-09 03:58:15.107100 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107120 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-09 03:58:15.107138 | orchestrator | [WARNING]: Skipped 2026-04-09 03:58:15.107156 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107174 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-09 03:58:15.107193 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107211 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-09 03:58:15.107229 | orchestrator | [WARNING]: Skipped 2026-04-09 03:58:15.107248 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107267 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-09 03:58:15.107287 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107305 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-09 03:58:15.107323 | orchestrator | [WARNING]: Skipped 2026-04-09 03:58:15.107340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 
2026-04-09 03:58:15.107358 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-09 03:58:15.107377 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107395 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-09 03:58:15.107414 | orchestrator | [WARNING]: Skipped 2026-04-09 03:58:15.107433 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107450 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-09 03:58:15.107469 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107487 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-09 03:58:15.107506 | orchestrator | [WARNING]: Skipped 2026-04-09 03:58:15.107523 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107542 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-09 03:58:15.107560 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107588 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-09 03:58:15.107608 | orchestrator | [WARNING]: Skipped 2026-04-09 03:58:15.107626 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107644 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-09 03:58:15.107662 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 03:58:15.107680 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-09 03:58:15.107699 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 03:58:15.107718 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 03:58:15.107735 | orchestrator | ok: [testbed-node-1 -> 
localhost] 2026-04-09 03:58:15.107772 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 03:58:15.107791 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 03:58:15.107806 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 03:58:15.107822 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 03:58:15.107838 | orchestrator | 2026-04-09 03:58:15.107854 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-09 03:58:15.107882 | orchestrator | Thursday 09 April 2026 03:57:56 +0000 (0:00:02.213) 0:00:59.107 ******** 2026-04-09 03:58:15.107899 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 03:58:15.108090 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 03:58:15.108113 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:58:15.108130 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:58:15.108145 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 03:58:15.108161 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:58:15.108278 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 03:58:33.669748 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:58:33.669990 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 03:58:33.670009 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:58:33.670083 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 03:58:33.670135 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:58:33.670148 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-09 
03:58:33.670160 | orchestrator | 2026-04-09 03:58:33.670173 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-09 03:58:33.670185 | orchestrator | Thursday 09 April 2026 03:58:15 +0000 (0:00:18.130) 0:01:17.238 ******** 2026-04-09 03:58:33.670198 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 03:58:33.670209 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:58:33.670221 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 03:58:33.670235 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:58:33.670249 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 03:58:33.670262 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:58:33.670276 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 03:58:33.670288 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:58:33.670301 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 03:58:33.670315 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:58:33.670328 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 03:58:33.670341 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:58:33.670355 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-09 03:58:33.670368 | orchestrator | 2026-04-09 03:58:33.670381 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-09 03:58:33.670395 | orchestrator | Thursday 09 April 2026 03:58:18 +0000 (0:00:03.229) 0:01:20.467 ******** 2026-04-09 03:58:33.670408 | 
orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 03:58:33.670423 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:58:33.670438 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 03:58:33.670452 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:58:33.670464 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 03:58:33.670477 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:58:33.670490 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 03:58:33.670536 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:58:33.670550 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-09 03:58:33.670563 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 03:58:33.670595 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:58:33.670609 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 03:58:33.670622 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:58:33.670634 | orchestrator | 2026-04-09 03:58:33.670645 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-09 03:58:33.670656 | orchestrator | Thursday 09 April 2026 03:58:20 +0000 (0:00:01.877) 0:01:22.344 ******** 2026-04-09 03:58:33.670667 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 
03:58:33.670678 | orchestrator | 2026-04-09 03:58:33.670689 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-09 03:58:33.670701 | orchestrator | Thursday 09 April 2026 03:58:21 +0000 (0:00:00.872) 0:01:23.217 ******** 2026-04-09 03:58:33.670712 | orchestrator | skipping: [testbed-manager] 2026-04-09 03:58:33.670723 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:58:33.670733 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:58:33.670744 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:58:33.670755 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:58:33.670796 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:58:33.670809 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:58:33.670819 | orchestrator | 2026-04-09 03:58:33.670830 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-09 03:58:33.670841 | orchestrator | Thursday 09 April 2026 03:58:21 +0000 (0:00:00.821) 0:01:24.038 ******** 2026-04-09 03:58:33.670852 | orchestrator | skipping: [testbed-manager] 2026-04-09 03:58:33.670863 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:58:33.670873 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:58:33.670884 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:58:33.670895 | orchestrator | changed: [testbed-node-0] 2026-04-09 03:58:33.670906 | orchestrator | changed: [testbed-node-1] 2026-04-09 03:58:33.670917 | orchestrator | changed: [testbed-node-2] 2026-04-09 03:58:33.670927 | orchestrator | 2026-04-09 03:58:33.670939 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-09 03:58:33.670974 | orchestrator | Thursday 09 April 2026 03:58:24 +0000 (0:00:02.492) 0:01:26.531 ******** 2026-04-09 03:58:33.670985 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 
03:58:33.670997 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 03:58:33.671008 | orchestrator | skipping: [testbed-manager] 2026-04-09 03:58:33.671019 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 03:58:33.671029 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 03:58:33.671040 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:58:33.671051 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:58:33.671062 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:58:33.671073 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 03:58:33.671084 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:58:33.671095 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 03:58:33.671106 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:58:33.671116 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 03:58:33.671136 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:58:33.671148 | orchestrator | 2026-04-09 03:58:33.671159 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-09 03:58:33.671170 | orchestrator | Thursday 09 April 2026 03:58:26 +0000 (0:00:01.676) 0:01:28.208 ******** 2026-04-09 03:58:33.671180 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 03:58:33.671192 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 03:58:33.671203 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:58:33.671213 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 03:58:33.671224 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 03:58:33.671234 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:58:33.671245 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 03:58:33.671256 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:58:33.671266 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 03:58:33.671277 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:58:33.671288 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 03:58:33.671298 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:58:33.671309 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-09 03:58:33.671320 | orchestrator | 2026-04-09 03:58:33.671331 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-09 03:58:33.671341 | orchestrator | Thursday 09 April 2026 03:58:27 +0000 (0:00:01.563) 0:01:29.771 ******** 2026-04-09 03:58:33.671352 | orchestrator | [WARNING]: Skipped 2026-04-09 03:58:33.671365 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-09 03:58:33.671376 | orchestrator | due to this access issue: 2026-04-09 03:58:33.671386 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-09 03:58:33.671397 | orchestrator | not a directory 2026-04-09 03:58:33.671414 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 03:58:33.671425 | orchestrator | 2026-04-09 03:58:33.671436 | orchestrator | TASK [prometheus : Create 
subdirectories for extra config files] *************** 2026-04-09 03:58:33.671447 | orchestrator | Thursday 09 April 2026 03:58:28 +0000 (0:00:01.221) 0:01:30.993 ******** 2026-04-09 03:58:33.671457 | orchestrator | skipping: [testbed-manager] 2026-04-09 03:58:33.671468 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:58:33.671479 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:58:33.671490 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:58:33.671501 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:58:33.671511 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:58:33.671522 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:58:33.671533 | orchestrator | 2026-04-09 03:58:33.671543 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-09 03:58:33.671555 | orchestrator | Thursday 09 April 2026 03:58:29 +0000 (0:00:01.048) 0:01:32.041 ******** 2026-04-09 03:58:33.671565 | orchestrator | skipping: [testbed-manager] 2026-04-09 03:58:33.671576 | orchestrator | skipping: [testbed-node-0] 2026-04-09 03:58:33.671586 | orchestrator | skipping: [testbed-node-1] 2026-04-09 03:58:33.671597 | orchestrator | skipping: [testbed-node-2] 2026-04-09 03:58:33.671608 | orchestrator | skipping: [testbed-node-3] 2026-04-09 03:58:33.671618 | orchestrator | skipping: [testbed-node-4] 2026-04-09 03:58:33.671629 | orchestrator | skipping: [testbed-node-5] 2026-04-09 03:58:33.671640 | orchestrator | 2026-04-09 03:58:33.671650 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-09 03:58:33.671669 | orchestrator | Thursday 09 April 2026 03:58:30 +0000 (0:00:01.031) 0:01:33.073 ******** 2026-04-09 03:58:33.671693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:58:35.377215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:58:35.377307 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 03:58:35.377319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:58:35.377327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:58:35.377364 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:58:35.377372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:35.377398 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:35.377416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:58:35.377420 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 03:58:35.377424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:35.377429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:58:35.377435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:58:35.377442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:58:35.377451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:35.377456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:35.377463 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:58:37.631113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:58:37.631219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:37.631235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:58:37.631266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 03:58:37.631279 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:58:37.631319 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 03:58:37.631350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:58:37.631364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 03:58:37.631377 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:37.631389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:37.631407 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:37.631427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 03:58:37.631440 | orchestrator | 2026-04-09 03:58:37.631453 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-09 03:58:37.631467 | orchestrator | Thursday 09 April 2026 03:58:35 +0000 (0:00:04.452) 0:01:37.526 ******** 2026-04-09 03:58:37.631479 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-09 03:58:37.631491 | orchestrator | skipping: [testbed-manager] 2026-04-09 03:58:37.631503 | orchestrator | 2026-04-09 03:58:37.631515 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 03:58:37.631527 | orchestrator | Thursday 09 April 2026 03:58:36 +0000 (0:00:01.400) 0:01:38.926 ******** 2026-04-09 03:58:37.631539 | orchestrator | 2026-04-09 03:58:37.631550 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 03:58:37.631562 | 
orchestrator | Thursday 09 April 2026 03:58:37 +0000 (0:00:00.294) 0:01:39.221 ******** 2026-04-09 03:58:37.631573 | orchestrator | 2026-04-09 03:58:37.631585 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 03:58:37.631596 | orchestrator | Thursday 09 April 2026 03:58:37 +0000 (0:00:00.082) 0:01:39.303 ******** 2026-04-09 03:58:37.631608 | orchestrator | 2026-04-09 03:58:37.631619 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 03:58:37.631630 | orchestrator | Thursday 09 April 2026 03:58:37 +0000 (0:00:00.087) 0:01:39.391 ******** 2026-04-09 03:58:37.631642 | orchestrator | 2026-04-09 03:58:37.631655 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 03:58:37.631668 | orchestrator | Thursday 09 April 2026 03:58:37 +0000 (0:00:00.088) 0:01:39.479 ******** 2026-04-09 03:58:37.631681 | orchestrator | 2026-04-09 03:58:37.631694 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 03:58:37.631706 | orchestrator | Thursday 09 April 2026 03:58:37 +0000 (0:00:00.082) 0:01:39.562 ******** 2026-04-09 03:58:37.631719 | orchestrator | 2026-04-09 03:58:37.631731 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 03:58:37.631751 | orchestrator | Thursday 09 April 2026 03:58:37 +0000 (0:00:00.106) 0:01:39.668 ******** 2026-04-09 04:00:17.641406 | orchestrator | 2026-04-09 04:00:17.641527 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-09 04:00:17.641544 | orchestrator | Thursday 09 April 2026 03:58:37 +0000 (0:00:00.097) 0:01:39.765 ******** 2026-04-09 04:00:17.641556 | orchestrator | changed: [testbed-manager] 2026-04-09 04:00:17.641566 | orchestrator | 2026-04-09 04:00:17.641573 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-node-exporter container] ****** 2026-04-09 04:00:17.641580 | orchestrator | Thursday 09 April 2026 03:58:59 +0000 (0:00:21.684) 0:02:01.450 ******** 2026-04-09 04:00:17.641586 | orchestrator | changed: [testbed-manager] 2026-04-09 04:00:17.641593 | orchestrator | changed: [testbed-node-3] 2026-04-09 04:00:17.641599 | orchestrator | changed: [testbed-node-4] 2026-04-09 04:00:17.641605 | orchestrator | changed: [testbed-node-5] 2026-04-09 04:00:17.641612 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:00:17.641618 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:00:17.641625 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:00:17.641631 | orchestrator | 2026-04-09 04:00:17.641637 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-09 04:00:17.641644 | orchestrator | Thursday 09 April 2026 03:59:14 +0000 (0:00:14.997) 0:02:16.447 ******** 2026-04-09 04:00:17.641672 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:00:17.641679 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:00:17.641685 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:00:17.641691 | orchestrator | 2026-04-09 04:00:17.641698 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-09 04:00:17.641705 | orchestrator | Thursday 09 April 2026 03:59:25 +0000 (0:00:11.085) 0:02:27.533 ******** 2026-04-09 04:00:17.641711 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:00:17.641717 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:00:17.641723 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:00:17.641729 | orchestrator | 2026-04-09 04:00:17.641736 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-09 04:00:17.641742 | orchestrator | Thursday 09 April 2026 03:59:31 +0000 (0:00:06.018) 0:02:33.552 ******** 2026-04-09 04:00:17.641748 | orchestrator | changed: 
[testbed-node-1] 2026-04-09 04:00:17.641754 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:00:17.641760 | orchestrator | changed: [testbed-node-5] 2026-04-09 04:00:17.641766 | orchestrator | changed: [testbed-node-4] 2026-04-09 04:00:17.641773 | orchestrator | changed: [testbed-node-3] 2026-04-09 04:00:17.641779 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:00:17.641785 | orchestrator | changed: [testbed-manager] 2026-04-09 04:00:17.641791 | orchestrator | 2026-04-09 04:00:17.641797 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-09 04:00:17.641803 | orchestrator | Thursday 09 April 2026 03:59:45 +0000 (0:00:14.279) 0:02:47.831 ******** 2026-04-09 04:00:17.641809 | orchestrator | changed: [testbed-manager] 2026-04-09 04:00:17.641815 | orchestrator | 2026-04-09 04:00:17.641822 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-09 04:00:17.641943 | orchestrator | Thursday 09 April 2026 03:59:54 +0000 (0:00:08.788) 0:02:56.620 ******** 2026-04-09 04:00:17.641957 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:00:17.641967 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:00:17.641977 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:00:17.641989 | orchestrator | 2026-04-09 04:00:17.642000 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-09 04:00:17.642011 | orchestrator | Thursday 09 April 2026 04:00:05 +0000 (0:00:10.942) 0:03:07.562 ******** 2026-04-09 04:00:17.642073 | orchestrator | changed: [testbed-manager] 2026-04-09 04:00:17.642081 | orchestrator | 2026-04-09 04:00:17.642089 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-09 04:00:17.642097 | orchestrator | Thursday 09 April 2026 04:00:11 +0000 (0:00:05.852) 0:03:13.414 ******** 2026-04-09 04:00:17.642104 | orchestrator | changed: 
[testbed-node-3] 2026-04-09 04:00:17.642111 | orchestrator | changed: [testbed-node-5] 2026-04-09 04:00:17.642118 | orchestrator | changed: [testbed-node-4] 2026-04-09 04:00:17.642125 | orchestrator | 2026-04-09 04:00:17.642133 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:00:17.642142 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-09 04:00:17.642151 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-09 04:00:17.642158 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-09 04:00:17.642165 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-09 04:00:17.642172 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-09 04:00:17.642179 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-09 04:00:17.642196 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-09 04:00:17.642204 | orchestrator | 2026-04-09 04:00:17.642212 | orchestrator | 2026-04-09 04:00:17.642219 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:00:17.642226 | orchestrator | Thursday 09 April 2026 04:00:17 +0000 (0:00:05.779) 0:03:19.194 ******** 2026-04-09 04:00:17.642234 | orchestrator | =============================================================================== 2026-04-09 04:00:17.642241 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.86s 2026-04-09 04:00:17.642270 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.68s 2026-04-09 04:00:17.642278 | 
orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.13s 2026-04-09 04:00:17.642286 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.00s 2026-04-09 04:00:17.642294 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.28s 2026-04-09 04:00:17.642301 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.09s 2026-04-09 04:00:17.642309 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.94s 2026-04-09 04:00:17.642315 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.79s 2026-04-09 04:00:17.642321 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.87s 2026-04-09 04:00:17.642327 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.02s 2026-04-09 04:00:17.642335 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.85s 2026-04-09 04:00:17.642345 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.78s 2026-04-09 04:00:17.642356 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.45s 2026-04-09 04:00:17.642366 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.45s 2026-04-09 04:00:17.642376 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.23s 2026-04-09 04:00:17.642386 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.01s 2026-04-09 04:00:17.642396 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.86s 2026-04-09 04:00:17.642405 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.49s 2026-04-09 04:00:17.642415 | orchestrator | 
prometheus : Find prometheus host config overrides ---------------------- 2.21s 2026-04-09 04:00:17.642424 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.90s 2026-04-09 04:00:20.611722 | orchestrator | 2026-04-09 04:00:20 | INFO  | Task 4a266f14-dbb6-4e2d-9545-f8bc6df5f6ca (grafana) was prepared for execution. 2026-04-09 04:00:20.611877 | orchestrator | 2026-04-09 04:00:20 | INFO  | It takes a moment until task 4a266f14-dbb6-4e2d-9545-f8bc6df5f6ca (grafana) has been started and output is visible here. 2026-04-09 04:00:31.370079 | orchestrator | 2026-04-09 04:00:31.370208 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 04:00:31.370232 | orchestrator | 2026-04-09 04:00:31.370247 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 04:00:31.370262 | orchestrator | Thursday 09 April 2026 04:00:25 +0000 (0:00:00.301) 0:00:00.301 ******** 2026-04-09 04:00:31.370277 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:00:31.370291 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:00:31.370303 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:00:31.370318 | orchestrator | 2026-04-09 04:00:31.370331 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 04:00:31.370345 | orchestrator | Thursday 09 April 2026 04:00:25 +0000 (0:00:00.352) 0:00:00.653 ******** 2026-04-09 04:00:31.370384 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-09 04:00:31.370401 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-09 04:00:31.370415 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-09 04:00:31.370427 | orchestrator | 2026-04-09 04:00:31.370449 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-09 04:00:31.370463 | orchestrator | 
2026-04-09 04:00:31.370477 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-09 04:00:31.370495 | orchestrator | Thursday 09 April 2026 04:00:26 +0000 (0:00:00.521) 0:00:01.175 ******** 2026-04-09 04:00:31.370512 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:00:31.370528 | orchestrator | 2026-04-09 04:00:31.370543 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-09 04:00:31.370556 | orchestrator | Thursday 09 April 2026 04:00:26 +0000 (0:00:00.610) 0:00:01.786 ******** 2026-04-09 04:00:31.370574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:31.370593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:31.370608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:31.370622 | orchestrator | 2026-04-09 04:00:31.370636 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-09 04:00:31.370649 | orchestrator | Thursday 09 April 2026 04:00:27 +0000 (0:00:00.964) 0:00:02.750 ******** 2026-04-09 04:00:31.370663 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-09 04:00:31.370677 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-09 04:00:31.370691 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 04:00:31.370706 | orchestrator | 2026-04-09 04:00:31.370726 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-09 04:00:31.370745 | orchestrator | Thursday 09 April 2026 04:00:28 +0000 (0:00:00.943) 0:00:03.694 ******** 2026-04-09 04:00:31.370762 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:00:31.370795 | orchestrator | 2026-04-09 04:00:31.370810 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA 
certificates] ******** 2026-04-09 04:00:31.370826 | orchestrator | Thursday 09 April 2026 04:00:29 +0000 (0:00:00.649) 0:00:04.344 ******** 2026-04-09 04:00:31.370910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:31.370927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:31.370943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:31.370956 | orchestrator | 2026-04-09 04:00:31.370969 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-09 04:00:31.370983 | orchestrator | Thursday 09 April 2026 04:00:30 +0000 (0:00:01.411) 0:00:05.755 ******** 2026-04-09 04:00:31.370996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 04:00:31.371011 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:00:31.371025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 04:00:31.371050 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:00:31.371085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 04:00:38.547354 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:00:38.547464 | orchestrator | 2026-04-09 04:00:38.547485 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-09 04:00:38.547499 | orchestrator | Thursday 09 April 2026 04:00:31 +0000 (0:00:00.640) 0:00:06.395 ******** 2026-04-09 04:00:38.547513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 
04:00:38.547529 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:00:38.547542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 04:00:38.547555 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:00:38.547568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 04:00:38.547580 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:00:38.547592 | orchestrator | 2026-04-09 04:00:38.547605 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-09 04:00:38.547617 | orchestrator | Thursday 09 April 2026 04:00:32 +0000 (0:00:00.667) 0:00:07.063 ******** 2026-04-09 04:00:38.547630 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:38.547681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:38.547715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:38.547728 | orchestrator | 2026-04-09 04:00:38.547741 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-09 04:00:38.547753 | orchestrator | Thursday 09 April 2026 04:00:33 +0000 (0:00:01.354) 0:00:08.418 ******** 2026-04-09 04:00:38.547765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:38.547776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:38.547788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:00:38.547808 | orchestrator | 2026-04-09 04:00:38.547819 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-09 04:00:38.547831 | orchestrator | Thursday 09 April 2026 04:00:34 +0000 (0:00:01.614) 0:00:10.033 ******** 2026-04-09 04:00:38.547906 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:00:38.547920 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:00:38.547932 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:00:38.547944 | orchestrator | 2026-04-09 04:00:38.547956 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-09 04:00:38.547969 | orchestrator | Thursday 09 April 2026 04:00:35 +0000 (0:00:00.352) 0:00:10.386 ******** 2026-04-09 04:00:38.547981 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 04:00:38.547995 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 04:00:38.548007 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 04:00:38.548019 | orchestrator | 2026-04-09 04:00:38.548032 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-09 04:00:38.548044 | orchestrator | Thursday 09 April 2026 04:00:36 +0000 (0:00:01.331) 0:00:11.718 
******** 2026-04-09 04:00:38.548057 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 04:00:38.548069 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 04:00:38.548089 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 04:00:38.548102 | orchestrator | 2026-04-09 04:00:38.548115 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-04-09 04:00:38.548130 | orchestrator | Thursday 09 April 2026 04:00:38 +0000 (0:00:01.848) 0:00:13.566 ******** 2026-04-09 04:00:45.274324 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 04:00:45.274404 | orchestrator | 2026-04-09 04:00:45.274411 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-04-09 04:00:45.274417 | orchestrator | Thursday 09 April 2026 04:00:39 +0000 (0:00:00.850) 0:00:14.416 ******** 2026-04-09 04:00:45.274421 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-04-09 04:00:45.274426 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-04-09 04:00:45.274431 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:00:45.274437 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:00:45.274444 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:00:45.274450 | orchestrator | 2026-04-09 04:00:45.274458 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-04-09 04:00:45.274468 | orchestrator | Thursday 09 April 2026 04:00:40 +0000 (0:00:00.782) 0:00:15.198 ******** 2026-04-09 04:00:45.274474 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:00:45.274480 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
04:00:45.274487 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:00:45.274493 | orchestrator | 2026-04-09 04:00:45.274499 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-09 04:00:45.274506 | orchestrator | Thursday 09 April 2026 04:00:40 +0000 (0:00:00.403) 0:00:15.602 ******** 2026-04-09 04:00:45.274515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1361756, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699699.932915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1361756, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699699.932915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1361756, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699699.932915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1362687, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1995485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1362687, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1995485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1362687, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1995485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1362371, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1142817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1362371, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1142817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1362371, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1142817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1362688, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2019565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1362688, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2019565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:45.274675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1362688, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2019565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:49.083032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1362522, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1531286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:49.083153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1362522, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1531286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:49.083166 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1362522, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1531286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:49.083176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1362554, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1966107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:49.083186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1362554, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1966107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:49.083207 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1362554, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1966107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:49.083231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1361754, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699699.9302197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:49.083255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1361754, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699699.9302197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:00:49.083264 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1361754, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699699.9302197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:49.083272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1361761, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.0919547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:49.083281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1361761, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.0919547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:49.083293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1361761, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.0919547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:49.083308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1362381, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1142817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1362381, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1142817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1362381, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1142817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1362540, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1570618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1362540, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1570618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1362540, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1570618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1362685, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1981235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1362685, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1981235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1362685, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1981235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1362356, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.101955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1362356, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.101955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1362356, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.101955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1362549, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1604395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:52.989523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1362549, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1604395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1362549, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1604395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1362530, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1570618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1362530, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1570618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1362530, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1570618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1362513, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1520553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1362513, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1520553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1362513, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1520553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1362507, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1498568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1362507, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1498568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1362507, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1498568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1362541, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1589558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1362541, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1589558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:00:57.193719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1362541, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1589558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1362382, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1487727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1362382, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1487727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1362382, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1487727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1362684, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1969566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1362684, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1969566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1362684, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.1969566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1362965, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3058746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1362965, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3058746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1362965, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3058746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1362714, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2318492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1362714, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2318492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1362714, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2318492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:01.324765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1362703, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2062514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.349847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1362703, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2062514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1362703, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2062514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1362777, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2338195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1362777, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2338195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1362777, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2338195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1362698, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.204122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1362698, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.204122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1362698, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.204122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1362894, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.276958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1362894, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.276958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1362894, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.276958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 04:01:05.350318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1362778, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2733593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr':
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:05.350333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1362778, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2733593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1362778, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2733593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1362898, 'dev': 116, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.276958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1362898, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.276958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1362898, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.276958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1362958, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3028257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1362958, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3028257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1362958, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.3028257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1362891, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2756338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1362891, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2756338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1362891, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2756338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1362772, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2319572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1362772, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2319572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:09.375707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1362772, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2319572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.206999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1362711, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2122946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1362711, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2122946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1362711, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2122946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207114 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1362771, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2318492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1362771, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2318492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1362771, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2318492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207147 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1362704, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2089567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1362704, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2089567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1362704, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2089567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1362774, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.233314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1362774, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.233314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1362774, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.233314, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:13.207198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1362911, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2999582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.366522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1362911, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2999582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.366671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1362911, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1775699700.2999582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.366707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1362905, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.282111, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.366721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1362905, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.282111, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.366733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1362905, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.282111, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.366745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1362700, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2044303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.366778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1362700, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2044303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.366831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1362700, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2044303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.366850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1362702, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.20564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.367033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1362702, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.20564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.367050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1362702, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.20564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.367061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1362889, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.273958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:01:17.367085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1362889, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.273958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 
04:02:57.306065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1362889, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.273958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:02:57.306188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1362902, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2789578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:02:57.306208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1362902, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2789578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:02:57.307110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1362902, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775699700.2789578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 04:02:57.307149 | orchestrator | 2026-04-09 04:02:57.307163 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-09 04:02:57.307176 | orchestrator | Thursday 09 April 2026 04:01:18 +0000 (0:00:38.080) 0:00:53.682 ******** 2026-04-09 04:02:57.307189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:02:57.307252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:02:57.307267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 04:02:57.307279 | orchestrator | 2026-04-09 04:02:57.307291 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-09 04:02:57.307303 | orchestrator | Thursday 09 April 2026 04:01:19 +0000 (0:00:01.032) 0:00:54.714 ******** 2026-04-09 04:02:57.307315 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:02:57.307328 | orchestrator | 2026-04-09 04:02:57.307339 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-09 04:02:57.307346 | orchestrator | Thursday 09 April 2026 04:01:21 +0000 (0:00:02.316) 0:00:57.031 ******** 2026-04-09 04:02:57.307361 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:02:57.307368 | 
orchestrator | 2026-04-09 04:02:57.307375 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-09 04:02:57.307381 | orchestrator | Thursday 09 April 2026 04:01:24 +0000 (0:00:02.356) 0:00:59.387 ******** 2026-04-09 04:02:57.307388 | orchestrator | 2026-04-09 04:02:57.307394 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-09 04:02:57.307401 | orchestrator | Thursday 09 April 2026 04:01:24 +0000 (0:00:00.077) 0:00:59.465 ******** 2026-04-09 04:02:57.307408 | orchestrator | 2026-04-09 04:02:57.307414 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-09 04:02:57.307435 | orchestrator | Thursday 09 April 2026 04:01:24 +0000 (0:00:00.081) 0:00:59.546 ******** 2026-04-09 04:02:57.307447 | orchestrator | 2026-04-09 04:02:57.307468 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-09 04:02:57.307480 | orchestrator | Thursday 09 April 2026 04:01:24 +0000 (0:00:00.076) 0:00:59.623 ******** 2026-04-09 04:02:57.307491 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:02:57.307502 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:02:57.307512 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:02:57.307522 | orchestrator | 2026-04-09 04:02:57.307533 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-09 04:02:57.307543 | orchestrator | Thursday 09 April 2026 04:01:26 +0000 (0:00:02.208) 0:01:01.832 ******** 2026-04-09 04:02:57.307553 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:02:57.307563 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:02:57.307574 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 
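The handler above polls Grafana until it answers, emitting `FAILED - RETRYING` with a decrementing budget (12 attempts in this run) before succeeding. As an annotation, here is a minimal shell sketch of the same retry-until-success loop; `retry` and the probed command are illustrative names, not part of OSISM or kolla-ansible:

```shell
# Retry a command until it succeeds, up to a fixed number of attempts.
# Mirrors the Ansible retries/until/delay loop seen in the handler above.
retry() {
    # usage: retry <max_attempts> <delay_seconds> <command> [args...]
    local max="$1" delay="$2" attempt=1
    shift 2
    until "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            echo "FAILED after ${max} attempts" >&2
            return 1
        fi
        echo "FAILED - RETRYING ($((max - attempt)) retries left)" >&2
        attempt=$((attempt + 1))
        sleep "$delay"
    done
}
```

In the real task the probed command would be an HTTP check against the Grafana port (3000, per the container definition above); with the delay set to the handler's poll interval this reproduces the retry lines in the log.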
2026-04-09 04:02:57.307587 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-04-09 04:02:57.307610 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-04-09 04:02:57.307621 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-04-09 04:02:57.307631 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:02:57.307644 | orchestrator | 2026-04-09 04:02:57.307655 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-09 04:02:57.307666 | orchestrator | Thursday 09 April 2026 04:02:17 +0000 (0:00:50.795) 0:01:52.627 ******** 2026-04-09 04:02:57.307676 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:02:57.307687 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:02:57.307699 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:02:57.307710 | orchestrator | 2026-04-09 04:02:57.307722 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-09 04:02:57.307734 | orchestrator | Thursday 09 April 2026 04:02:51 +0000 (0:00:34.317) 0:02:26.945 ******** 2026-04-09 04:02:57.307745 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:02:57.307756 | orchestrator | 2026-04-09 04:02:57.307766 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-09 04:02:57.307778 | orchestrator | Thursday 09 April 2026 04:02:54 +0000 (0:00:02.260) 0:02:29.206 ******** 2026-04-09 04:02:57.307789 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:02:57.307800 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:02:57.307812 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:02:57.307823 | orchestrator | 2026-04-09 04:02:57.307834 | orchestrator | TASK [grafana : Enable grafana datasources] 
************************************ 2026-04-09 04:02:57.307845 | orchestrator | Thursday 09 April 2026 04:02:54 +0000 (0:00:00.381) 0:02:29.587 ******** 2026-04-09 04:02:57.307858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-04-09 04:02:57.307886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-09 04:02:58.021760 | orchestrator | 2026-04-09 04:02:58.021883 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-09 04:02:58.021964 | orchestrator | Thursday 09 April 2026 04:02:57 +0000 (0:00:02.739) 0:02:32.326 ******** 2026-04-09 04:02:58.021983 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:02:58.022000 | orchestrator | 2026-04-09 04:02:58.022083 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:02:58.022109 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 04:02:58.022127 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 04:02:58.022143 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 04:02:58.022158 | orchestrator | 2026-04-09 04:02:58.022175 | orchestrator | 2026-04-09 04:02:58.022191 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-09 04:02:58.022207 | orchestrator | Thursday 09 April 2026 04:02:57 +0000 (0:00:00.327) 0:02:32.654 ******** 2026-04-09 04:02:58.022217 | orchestrator | =============================================================================== 2026-04-09 04:02:58.022243 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.80s 2026-04-09 04:02:58.022280 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.08s 2026-04-09 04:02:58.022296 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.32s 2026-04-09 04:02:58.022310 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.74s 2026-04-09 04:02:58.022325 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.36s 2026-04-09 04:02:58.022341 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.32s 2026-04-09 04:02:58.022356 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.26s 2026-04-09 04:02:58.022372 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.21s 2026-04-09 04:02:58.022388 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.85s 2026-04-09 04:02:58.022403 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.62s 2026-04-09 04:02:58.022418 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.41s 2026-04-09 04:02:58.022441 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.35s 2026-04-09 04:02:58.022471 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.33s 2026-04-09 04:02:58.022485 | orchestrator | grafana : Check grafana 
containers -------------------------------------- 1.03s 2026-04-09 04:02:58.022500 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.96s 2026-04-09 04:02:58.022517 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.94s 2026-04-09 04:02:58.022532 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.85s 2026-04-09 04:02:58.022570 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.78s 2026-04-09 04:02:58.022582 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.67s 2026-04-09 04:02:58.022593 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.65s 2026-04-09 04:02:58.383467 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-04-09 04:02:58.396524 | orchestrator | + set -e 2026-04-09 04:02:58.396640 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 04:02:58.397465 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 04:02:58.397509 | orchestrator | ++ INTERACTIVE=false 2026-04-09 04:02:58.397528 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 04:02:58.397545 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 04:02:58.397561 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 04:02:58.398658 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 04:02:58.398687 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 04:02:58.398697 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 04:02:58.398707 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 04:02:58.398717 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 04:02:58.398728 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 04:02:58.398738 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 04:02:58.398748 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 
04:02:58.398758 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 04:02:58.398769 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 04:02:58.398778 | orchestrator | ++ export ARA=false 2026-04-09 04:02:58.398788 | orchestrator | ++ ARA=false 2026-04-09 04:02:58.398798 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 04:02:58.398807 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 04:02:58.398817 | orchestrator | ++ export TEMPEST=false 2026-04-09 04:02:58.398826 | orchestrator | ++ TEMPEST=false 2026-04-09 04:02:58.398836 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 04:02:58.398845 | orchestrator | ++ IS_ZUUL=true 2026-04-09 04:02:58.398855 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 04:02:58.398865 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 04:02:58.398874 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 04:02:58.398884 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 04:02:58.398894 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 04:02:58.398960 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 04:02:58.398973 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 04:02:58.398983 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 04:02:58.398993 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 04:02:58.399002 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 04:02:58.400336 | orchestrator | ++ semver 9.5.0 8.0.0 2026-04-09 04:02:58.479059 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-09 04:02:58.479162 | orchestrator | + osism apply clusterapi 2026-04-09 04:03:00.772865 | orchestrator | 2026-04-09 04:03:00 | INFO  | Task 7d787326-1451-49da-9559-4533a83e2095 (clusterapi) was prepared for execution. 2026-04-09 04:03:00.773055 | orchestrator | 2026-04-09 04:03:00 | INFO  | It takes a moment until task 7d787326-1451-49da-9559-4533a83e2095 (clusterapi) has been started and output is visible here. 
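The trace above calls a `semver` helper (defined in the sourced `include.sh`, which is not shown in this log) and then gates the Cluster API deployment on the result being `-ge 0`, i.e. `MANAGER_VERSION` (9.5.0) at least 8.0.0. A hedged sketch of such a three-way version comparison using `sort -V`; the helper's actual implementation may differ:

```shell
# Three-way semantic version comparison: prints -1, 0, or 1 when
# $1 < $2, $1 = $2, or $1 > $2 (plain numeric dotted versions).
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

# Gating a step on a minimum version, as the deploy script does:
#   [ "$(semver "$MANAGER_VERSION" 8.0.0)" -ge 0 ] && osism apply clusterapi
```

With `MANAGER_VERSION=9.5.0` this yields `1`, so the `[[ 1 -ge 0 ]]` check in the trace passes and `osism apply clusterapi` runs.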
2026-04-09 04:04:08.263106 | orchestrator | 2026-04-09 04:04:08.263195 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-04-09 04:04:08.263206 | orchestrator | 2026-04-09 04:04:08.263213 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-04-09 04:04:08.263220 | orchestrator | Thursday 09 April 2026 04:03:05 +0000 (0:00:00.205) 0:00:00.205 ******** 2026-04-09 04:04:08.263227 | orchestrator | included: cert_manager for testbed-manager 2026-04-09 04:04:08.263234 | orchestrator | 2026-04-09 04:04:08.263241 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-04-09 04:04:08.263247 | orchestrator | Thursday 09 April 2026 04:03:05 +0000 (0:00:00.278) 0:00:00.484 ******** 2026-04-09 04:04:08.263254 | orchestrator | changed: [testbed-manager] 2026-04-09 04:04:08.263261 | orchestrator | 2026-04-09 04:04:08.263268 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-04-09 04:04:08.263274 | orchestrator | Thursday 09 April 2026 04:03:11 +0000 (0:00:05.722) 0:00:06.206 ******** 2026-04-09 04:04:08.263280 | orchestrator | changed: [testbed-manager] 2026-04-09 04:04:08.263287 | orchestrator | 2026-04-09 04:04:08.263293 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-04-09 04:04:08.263299 | orchestrator | 2026-04-09 04:04:08.263306 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-04-09 04:04:08.263312 | orchestrator | Thursday 09 April 2026 04:03:45 +0000 (0:00:34.069) 0:00:40.276 ******** 2026-04-09 04:04:08.263318 | orchestrator | ok: [testbed-manager] 2026-04-09 04:04:08.263325 | orchestrator | 2026-04-09 04:04:08.263331 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-04-09 04:04:08.263353 | orchestrator | Thursday 
09 April 2026 04:03:46 +0000 (0:00:01.283) 0:00:41.559 ******** 2026-04-09 04:04:08.263360 | orchestrator | ok: [testbed-manager] 2026-04-09 04:04:08.263366 | orchestrator | 2026-04-09 04:04:08.263373 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-04-09 04:04:08.263379 | orchestrator | Thursday 09 April 2026 04:03:47 +0000 (0:00:00.184) 0:00:41.744 ******** 2026-04-09 04:04:08.263386 | orchestrator | ok: [testbed-manager] 2026-04-09 04:04:08.263392 | orchestrator | 2026-04-09 04:04:08.263398 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-04-09 04:04:08.263405 | orchestrator | Thursday 09 April 2026 04:04:05 +0000 (0:00:17.959) 0:00:59.703 ******** 2026-04-09 04:04:08.263411 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:04:08.263417 | orchestrator | 2026-04-09 04:04:08.263423 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-04-09 04:04:08.263430 | orchestrator | Thursday 09 April 2026 04:04:05 +0000 (0:00:00.137) 0:00:59.841 ******** 2026-04-09 04:04:08.263436 | orchestrator | changed: [testbed-manager] 2026-04-09 04:04:08.263442 | orchestrator | 2026-04-09 04:04:08.263449 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:04:08.263456 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 04:04:08.263462 | orchestrator | 2026-04-09 04:04:08.263469 | orchestrator | 2026-04-09 04:04:08.263475 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:04:08.263481 | orchestrator | Thursday 09 April 2026 04:04:07 +0000 (0:00:02.559) 0:01:02.400 ******** 2026-04-09 04:04:08.263487 | orchestrator | =============================================================================== 2026-04-09 04:04:08.263511 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 34.07s 2026-04-09 04:04:08.263518 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.96s 2026-04-09 04:04:08.263525 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.72s 2026-04-09 04:04:08.263531 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.56s 2026-04-09 04:04:08.263537 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.28s 2026-04-09 04:04:08.263543 | orchestrator | Include cert_manager role ----------------------------------------------- 0.28s 2026-04-09 04:04:08.263549 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.18s 2026-04-09 04:04:08.263555 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.14s 2026-04-09 04:04:08.638166 | orchestrator | + osism apply magnum 2026-04-09 04:04:10.923789 | orchestrator | 2026-04-09 04:04:10 | INFO  | Task 1efad940-e4e5-4f3b-bd35-3f98b363366c (magnum) was prepared for execution. 2026-04-09 04:04:10.923914 | orchestrator | 2026-04-09 04:04:10 | INFO  | It takes a moment until task 1efad940-e4e5-4f3b-bd35-3f98b363366c (magnum) has been started and output is visible here. 
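Each play above closes with a PLAY RECAP whose `unreachable` and `failed` counters decide whether the script (running under `set -e`) proceeds to the next `osism apply`. As an annotation, a minimal sketch of checking such a recap; this is illustrative, not the check Zuul or OSISM actually performs:

```shell
# Succeed only if no host line in the given PLAY RECAP text reports a
# non-zero unreachable= or failed= counter.
recap_ok() {
    ! printf '%s\n' "$1" | grep -Eq 'unreachable=[1-9][0-9]*|failed=[1-9][0-9]*'
}
```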
2026-04-09 04:04:55.448841 | orchestrator | 2026-04-09 04:04:55.448925 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 04:04:55.448936 | orchestrator | 2026-04-09 04:04:55.448945 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 04:04:55.448954 | orchestrator | Thursday 09 April 2026 04:04:15 +0000 (0:00:00.275) 0:00:00.275 ******** 2026-04-09 04:04:55.448961 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:04:55.448970 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:04:55.448978 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:04:55.448985 | orchestrator | 2026-04-09 04:04:55.448992 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 04:04:55.449001 | orchestrator | Thursday 09 April 2026 04:04:16 +0000 (0:00:00.340) 0:00:00.616 ******** 2026-04-09 04:04:55.449008 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-09 04:04:55.449016 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-09 04:04:55.449024 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-09 04:04:55.449032 | orchestrator | 2026-04-09 04:04:55.449040 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-09 04:04:55.449048 | orchestrator | 2026-04-09 04:04:55.449056 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-09 04:04:55.449063 | orchestrator | Thursday 09 April 2026 04:04:16 +0000 (0:00:00.501) 0:00:01.118 ******** 2026-04-09 04:04:55.449071 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:04:55.449079 | orchestrator | 2026-04-09 04:04:55.449087 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-09 
04:04:55.449095 | orchestrator | Thursday 09 April 2026 04:04:17 +0000 (0:00:00.678) 0:00:01.796 ******** 2026-04-09 04:04:55.449105 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-09 04:04:55.449113 | orchestrator | 2026-04-09 04:04:55.449121 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-09 04:04:55.449129 | orchestrator | Thursday 09 April 2026 04:04:21 +0000 (0:00:03.703) 0:00:05.500 ******** 2026-04-09 04:04:55.449137 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-09 04:04:55.449146 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-09 04:04:55.449154 | orchestrator | 2026-04-09 04:04:55.449161 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-09 04:04:55.449169 | orchestrator | Thursday 09 April 2026 04:04:27 +0000 (0:00:06.749) 0:00:12.249 ******** 2026-04-09 04:04:55.449178 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 04:04:55.449208 | orchestrator | 2026-04-09 04:04:55.449214 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-09 04:04:55.449229 | orchestrator | Thursday 09 April 2026 04:04:31 +0000 (0:00:03.552) 0:00:15.802 ******** 2026-04-09 04:04:55.449234 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 04:04:55.449239 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-09 04:04:55.449244 | orchestrator | 2026-04-09 04:04:55.449248 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-09 04:04:55.449253 | orchestrator | Thursday 09 April 2026 04:04:35 +0000 (0:00:04.019) 0:00:19.821 ******** 2026-04-09 04:04:55.449257 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-09 04:04:55.449262 | orchestrator | 2026-04-09 04:04:55.449267 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-09 04:04:55.449271 | orchestrator | Thursday 09 April 2026 04:04:38 +0000 (0:00:03.369) 0:00:23.191 ******** 2026-04-09 04:04:55.449316 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-09 04:04:55.449321 | orchestrator | 2026-04-09 04:04:55.449326 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-09 04:04:55.449330 | orchestrator | Thursday 09 April 2026 04:04:42 +0000 (0:00:04.014) 0:00:27.205 ******** 2026-04-09 04:04:55.449335 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:04:55.449339 | orchestrator | 2026-04-09 04:04:55.449344 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-09 04:04:55.449348 | orchestrator | Thursday 09 April 2026 04:04:46 +0000 (0:00:03.435) 0:00:30.640 ******** 2026-04-09 04:04:55.449353 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:04:55.449357 | orchestrator | 2026-04-09 04:04:55.449362 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-09 04:04:55.449367 | orchestrator | Thursday 09 April 2026 04:04:50 +0000 (0:00:04.037) 0:00:34.678 ******** 2026-04-09 04:04:55.449371 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:04:55.449376 | orchestrator | 2026-04-09 04:04:55.449380 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-09 04:04:55.449385 | orchestrator | Thursday 09 April 2026 04:04:53 +0000 (0:00:03.483) 0:00:38.161 ******** 2026-04-09 04:04:55.449409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:04:55.449418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:04:55.449434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:04:55.449441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:04:55.449447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:04:55.449458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:03.371190 | orchestrator | 2026-04-09 04:05:03.372520 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-09 04:05:03.372619 | orchestrator | Thursday 09 April 2026 04:04:55 +0000 (0:00:01.728) 0:00:39.889 ******** 2026-04-09 04:05:03.372644 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:05:03.372669 | orchestrator | 2026-04-09 04:05:03.372691 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-09 04:05:03.372713 | orchestrator | Thursday 09 April 2026 04:04:55 +0000 (0:00:00.166) 0:00:40.056 ******** 2026-04-09 04:05:03.372735 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:05:03.372757 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:05:03.372818 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:05:03.372839 | orchestrator | 2026-04-09 04:05:03.372851 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-09 04:05:03.372862 | orchestrator | Thursday 09 April 2026 04:04:55 +0000 (0:00:00.329) 0:00:40.385 ******** 2026-04-09 04:05:03.372873 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 04:05:03.372884 | orchestrator | 2026-04-09 04:05:03.372896 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-09 04:05:03.372907 | orchestrator | Thursday 09 April 2026 04:04:56 +0000 (0:00:00.915) 0:00:41.301 ******** 2026-04-09 04:05:03.372921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:03.372952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:03.372964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:03.373006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:03.373030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:03.373042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:03.373053 | orchestrator | 2026-04-09 04:05:03.373070 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-09 04:05:03.373082 
| orchestrator | Thursday 09 April 2026 04:04:59 +0000 (0:00:02.500) 0:00:43.801 ******** 2026-04-09 04:05:03.373093 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:05:03.373106 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:05:03.373116 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:05:03.373127 | orchestrator | 2026-04-09 04:05:03.373139 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-09 04:05:03.373149 | orchestrator | Thursday 09 April 2026 04:04:59 +0000 (0:00:00.579) 0:00:44.381 ******** 2026-04-09 04:05:03.373161 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:05:03.373173 | orchestrator | 2026-04-09 04:05:03.373184 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-09 04:05:03.373195 | orchestrator | Thursday 09 April 2026 04:05:00 +0000 (0:00:00.623) 0:00:45.004 ******** 2026-04-09 04:05:03.373206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:03.373258 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:04.334866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:04.335010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:04.335029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:04.335041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:04.335053 | orchestrator | 2026-04-09 04:05:04.335066 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-09 04:05:04.335078 | orchestrator | Thursday 09 April 2026 04:05:03 +0000 (0:00:02.815) 0:00:47.820 ******** 2026-04-09 04:05:04.335133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 04:05:04.335147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:05:04.335158 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:05:04.335177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 04:05:04.335189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:05:04.335201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 04:05:04.335278 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:05:04.335302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:05:08.158142 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:05:08.158267 | orchestrator | 2026-04-09 
04:05:08.158279 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-09 04:05:08.158288 | orchestrator | Thursday 09 April 2026 04:05:04 +0000 (0:00:00.956) 0:00:48.776 ******** 2026-04-09 04:05:08.158298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 04:05:08.158322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:05:08.158330 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 04:05:08.158337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 04:05:08.158362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:05:08.158369 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:05:08.158391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 04:05:08.158399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:05:08.158405 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:05:08.158412 | orchestrator | 2026-04-09 04:05:08.158419 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-09 04:05:08.158439 | orchestrator | Thursday 09 April 2026 04:05:05 +0000 (0:00:00.984) 0:00:49.760 ******** 2026-04-09 04:05:08.158447 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:08.158460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:08.158472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:14.806595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:14.806729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:14.806736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:14.806758 | orchestrator | 2026-04-09 04:05:14.806764 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-09 04:05:14.806769 | orchestrator | Thursday 09 April 2026 04:05:08 +0000 (0:00:02.848) 0:00:52.609 ******** 2026-04-09 04:05:14.806774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:14.806790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:14.806794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:14.806801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:14.806805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:14.806814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:14.806818 | orchestrator | 2026-04-09 04:05:14.806823 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-09 04:05:14.806829 | orchestrator | Thursday 09 April 2026 04:05:14 +0000 (0:00:05.891) 0:00:58.500 ******** 2026-04-09 04:05:14.806845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 04:05:16.846422 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:05:16.846497 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:05:16.846520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 04:05:16.846542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:05:16.846547 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:05:16.846552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 04:05:16.846567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:05:16.846572 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:05:16.846577 | orchestrator | 2026-04-09 04:05:16.846583 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-09 04:05:16.846589 | orchestrator | Thursday 09 April 2026 04:05:14 +0000 (0:00:00.758) 0:00:59.259 ******** 2026-04-09 04:05:16.846598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:16.846608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:16.846613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 04:05:16.846618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:05:16.846627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 04:06:13.030827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-04-09 04:06:13.030938 | orchestrator | 2026-04-09 04:06:13.030949 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-09 04:06:13.030957 | orchestrator | Thursday 09 April 2026 04:05:16 +0000 (0:00:02.031) 0:01:01.291 ******** 2026-04-09 04:06:13.030963 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:06:13.030970 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:06:13.030976 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:06:13.030983 | orchestrator | 2026-04-09 04:06:13.030989 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-09 04:06:13.030995 | orchestrator | Thursday 09 April 2026 04:05:17 +0000 (0:00:00.621) 0:01:01.912 ******** 2026-04-09 04:06:13.031002 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:06:13.031008 | orchestrator | 2026-04-09 04:06:13.031014 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-09 04:06:13.031020 | orchestrator | Thursday 09 April 2026 04:05:19 +0000 (0:00:02.313) 0:01:04.225 ******** 2026-04-09 04:06:13.031026 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:06:13.031032 | orchestrator | 2026-04-09 04:06:13.031039 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-09 04:06:13.031045 | orchestrator | Thursday 09 April 2026 04:05:22 +0000 (0:00:02.361) 0:01:06.587 ******** 2026-04-09 04:06:13.031051 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:06:13.031062 | orchestrator | 2026-04-09 04:06:13.031073 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-09 04:06:13.031084 | orchestrator | Thursday 09 April 2026 04:05:39 +0000 (0:00:17.377) 0:01:23.964 ******** 2026-04-09 04:06:13.031095 | orchestrator | 2026-04-09 04:06:13.031106 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-04-09 04:06:13.031117 | orchestrator | Thursday 09 April 2026 04:05:39 +0000 (0:00:00.077) 0:01:24.042 ******** 2026-04-09 04:06:13.031129 | orchestrator | 2026-04-09 04:06:13.031140 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-09 04:06:13.031152 | orchestrator | Thursday 09 April 2026 04:05:39 +0000 (0:00:00.081) 0:01:24.123 ******** 2026-04-09 04:06:13.031161 | orchestrator | 2026-04-09 04:06:13.031167 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-09 04:06:13.031174 | orchestrator | Thursday 09 April 2026 04:05:39 +0000 (0:00:00.078) 0:01:24.202 ******** 2026-04-09 04:06:13.031180 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:06:13.031186 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:06:13.031192 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:06:13.031199 | orchestrator | 2026-04-09 04:06:13.031205 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-09 04:06:13.031211 | orchestrator | Thursday 09 April 2026 04:06:01 +0000 (0:00:21.608) 0:01:45.811 ******** 2026-04-09 04:06:13.031217 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:06:13.031223 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:06:13.031229 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:06:13.031235 | orchestrator | 2026-04-09 04:06:13.031242 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:06:13.031249 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 04:06:13.031256 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 04:06:13.031263 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-09 04:06:13.031269 | orchestrator | 2026-04-09 04:06:13.031275 | orchestrator | 2026-04-09 04:06:13.031282 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:06:13.031294 | orchestrator | Thursday 09 April 2026 04:06:12 +0000 (0:00:11.287) 0:01:57.098 ******** 2026-04-09 04:06:13.031300 | orchestrator | =============================================================================== 2026-04-09 04:06:13.031307 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.61s 2026-04-09 04:06:13.031313 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.38s 2026-04-09 04:06:13.031319 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.29s 2026-04-09 04:06:13.031325 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.75s 2026-04-09 04:06:13.031331 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.89s 2026-04-09 04:06:13.031339 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.04s 2026-04-09 04:06:13.031347 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.02s 2026-04-09 04:06:13.031369 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.01s 2026-04-09 04:06:13.031377 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.70s 2026-04-09 04:06:13.031385 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.55s 2026-04-09 04:06:13.031392 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.48s 2026-04-09 04:06:13.031400 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.44s 2026-04-09 04:06:13.031407 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.37s 2026-04-09 04:06:13.031415 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.85s 2026-04-09 04:06:13.031428 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.82s 2026-04-09 04:06:13.031436 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.50s 2026-04-09 04:06:13.031443 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.36s 2026-04-09 04:06:13.031451 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.31s 2026-04-09 04:06:13.031458 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.03s 2026-04-09 04:06:13.031466 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.73s 2026-04-09 04:06:13.769386 | orchestrator | ok: Runtime: 1:47:12.865690 2026-04-09 04:06:14.034969 | 2026-04-09 04:06:14.035126 | TASK [Deploy in a nutshell] 2026-04-09 04:06:14.579708 | orchestrator | skipping: Conditional result was False 2026-04-09 04:06:14.594039 | 2026-04-09 04:06:14.594211 | TASK [Bootstrap services] 2026-04-09 04:06:15.340200 | orchestrator | 2026-04-09 04:06:15.340351 | orchestrator | # BOOTSTRAP 2026-04-09 04:06:15.340364 | orchestrator | 2026-04-09 04:06:15.340370 | orchestrator | + set -e 2026-04-09 04:06:15.340375 | orchestrator | + echo 2026-04-09 04:06:15.340381 | orchestrator | + echo '# BOOTSTRAP' 2026-04-09 04:06:15.340388 | orchestrator | + echo 2026-04-09 04:06:15.340411 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-09 04:06:15.349055 | orchestrator | + set -e 2026-04-09 04:06:15.349126 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-09 04:06:17.550346 | orchestrator | 2026-04-09 04:06:17 | INFO  | It takes a 
moment until task 17e8f70d-7485-41df-878f-050f199543b5 (flavor-manager) has been started and output is visible here. 2026-04-09 04:06:26.413275 | orchestrator | 2026-04-09 04:06:21 | INFO  | Flavor SCS-1L-1 created 2026-04-09 04:06:26.413410 | orchestrator | 2026-04-09 04:06:21 | INFO  | Flavor SCS-1L-1-5 created 2026-04-09 04:06:26.413428 | orchestrator | 2026-04-09 04:06:21 | INFO  | Flavor SCS-1V-2 created 2026-04-09 04:06:26.413441 | orchestrator | 2026-04-09 04:06:21 | INFO  | Flavor SCS-1V-2-5 created 2026-04-09 04:06:26.413453 | orchestrator | 2026-04-09 04:06:22 | INFO  | Flavor SCS-1V-4 created 2026-04-09 04:06:26.413464 | orchestrator | 2026-04-09 04:06:22 | INFO  | Flavor SCS-1V-4-10 created 2026-04-09 04:06:26.413476 | orchestrator | 2026-04-09 04:06:22 | INFO  | Flavor SCS-1V-8 created 2026-04-09 04:06:26.413488 | orchestrator | 2026-04-09 04:06:22 | INFO  | Flavor SCS-1V-8-20 created 2026-04-09 04:06:26.413515 | orchestrator | 2026-04-09 04:06:22 | INFO  | Flavor SCS-2V-4 created 2026-04-09 04:06:26.413527 | orchestrator | 2026-04-09 04:06:22 | INFO  | Flavor SCS-2V-4-10 created 2026-04-09 04:06:26.413538 | orchestrator | 2026-04-09 04:06:23 | INFO  | Flavor SCS-2V-8 created 2026-04-09 04:06:26.413549 | orchestrator | 2026-04-09 04:06:23 | INFO  | Flavor SCS-2V-8-20 created 2026-04-09 04:06:26.413560 | orchestrator | 2026-04-09 04:06:23 | INFO  | Flavor SCS-2V-16 created 2026-04-09 04:06:26.413571 | orchestrator | 2026-04-09 04:06:23 | INFO  | Flavor SCS-2V-16-50 created 2026-04-09 04:06:26.413582 | orchestrator | 2026-04-09 04:06:23 | INFO  | Flavor SCS-4V-8 created 2026-04-09 04:06:26.413593 | orchestrator | 2026-04-09 04:06:23 | INFO  | Flavor SCS-4V-8-20 created 2026-04-09 04:06:26.413604 | orchestrator | 2026-04-09 04:06:23 | INFO  | Flavor SCS-4V-16 created 2026-04-09 04:06:26.413615 | orchestrator | 2026-04-09 04:06:24 | INFO  | Flavor SCS-4V-16-50 created 2026-04-09 04:06:26.413626 | orchestrator | 2026-04-09 04:06:24 | INFO  | Flavor 
SCS-4V-32 created 2026-04-09 04:06:26.413637 | orchestrator | 2026-04-09 04:06:24 | INFO  | Flavor SCS-4V-32-100 created 2026-04-09 04:06:26.413648 | orchestrator | 2026-04-09 04:06:24 | INFO  | Flavor SCS-8V-16 created 2026-04-09 04:06:26.413659 | orchestrator | 2026-04-09 04:06:24 | INFO  | Flavor SCS-8V-16-50 created 2026-04-09 04:06:26.413670 | orchestrator | 2026-04-09 04:06:24 | INFO  | Flavor SCS-8V-32 created 2026-04-09 04:06:26.413681 | orchestrator | 2026-04-09 04:06:25 | INFO  | Flavor SCS-8V-32-100 created 2026-04-09 04:06:26.413692 | orchestrator | 2026-04-09 04:06:25 | INFO  | Flavor SCS-16V-32 created 2026-04-09 04:06:26.413703 | orchestrator | 2026-04-09 04:06:25 | INFO  | Flavor SCS-16V-32-100 created 2026-04-09 04:06:26.413752 | orchestrator | 2026-04-09 04:06:25 | INFO  | Flavor SCS-2V-4-20s created 2026-04-09 04:06:26.413764 | orchestrator | 2026-04-09 04:06:25 | INFO  | Flavor SCS-4V-8-50s created 2026-04-09 04:06:26.413775 | orchestrator | 2026-04-09 04:06:26 | INFO  | Flavor SCS-8V-32-100s created 2026-04-09 04:06:28.775834 | orchestrator | 2026-04-09 04:06:28 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-09 04:06:38.935214 | orchestrator | 2026-04-09 04:06:38 | INFO  | Task 914be443-33dc-456a-a7f2-b33cb3e5ef65 (bootstrap-basic) was prepared for execution. 2026-04-09 04:06:38.935363 | orchestrator | 2026-04-09 04:06:38 | INFO  | It takes a moment until task 914be443-33dc-456a-a7f2-b33cb3e5ef65 (bootstrap-basic) has been started and output is visible here. 
2026-04-09 04:07:23.466154 | orchestrator | 2026-04-09 04:07:23.466273 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-09 04:07:23.466292 | orchestrator | 2026-04-09 04:07:23.466304 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 04:07:23.466316 | orchestrator | Thursday 09 April 2026 04:06:43 +0000 (0:00:00.077) 0:00:00.077 ******** 2026-04-09 04:07:23.466328 | orchestrator | ok: [localhost] 2026-04-09 04:07:23.466340 | orchestrator | 2026-04-09 04:07:23.466351 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-09 04:07:23.466363 | orchestrator | Thursday 09 April 2026 04:06:45 +0000 (0:00:01.865) 0:00:01.943 ******** 2026-04-09 04:07:23.466373 | orchestrator | ok: [localhost] 2026-04-09 04:07:23.466384 | orchestrator | 2026-04-09 04:07:23.466396 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-09 04:07:23.466407 | orchestrator | Thursday 09 April 2026 04:06:53 +0000 (0:00:07.977) 0:00:09.920 ******** 2026-04-09 04:07:23.466418 | orchestrator | changed: [localhost] 2026-04-09 04:07:23.466429 | orchestrator | 2026-04-09 04:07:23.466440 | orchestrator | TASK [Create public network] *************************************************** 2026-04-09 04:07:23.466532 | orchestrator | Thursday 09 April 2026 04:06:59 +0000 (0:00:06.454) 0:00:16.375 ******** 2026-04-09 04:07:23.466544 | orchestrator | changed: [localhost] 2026-04-09 04:07:23.466555 | orchestrator | 2026-04-09 04:07:23.466567 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-09 04:07:23.466578 | orchestrator | Thursday 09 April 2026 04:07:05 +0000 (0:00:05.371) 0:00:21.747 ******** 2026-04-09 04:07:23.466594 | orchestrator | changed: [localhost] 2026-04-09 04:07:23.466608 | orchestrator | 2026-04-09 04:07:23.466621 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-09 04:07:23.466634 | orchestrator | Thursday 09 April 2026 04:07:11 +0000 (0:00:06.482) 0:00:28.229 ******** 2026-04-09 04:07:23.466648 | orchestrator | changed: [localhost] 2026-04-09 04:07:23.466661 | orchestrator | 2026-04-09 04:07:23.466673 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-09 04:07:23.466686 | orchestrator | Thursday 09 April 2026 04:07:15 +0000 (0:00:04.152) 0:00:32.382 ******** 2026-04-09 04:07:23.466699 | orchestrator | changed: [localhost] 2026-04-09 04:07:23.466711 | orchestrator | 2026-04-09 04:07:23.466725 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-09 04:07:23.466748 | orchestrator | Thursday 09 April 2026 04:07:19 +0000 (0:00:03.965) 0:00:36.347 ******** 2026-04-09 04:07:23.466762 | orchestrator | ok: [localhost] 2026-04-09 04:07:23.466775 | orchestrator | 2026-04-09 04:07:23.466788 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:07:23.466801 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 04:07:23.466816 | orchestrator | 2026-04-09 04:07:23.466828 | orchestrator | 2026-04-09 04:07:23.466841 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:07:23.466854 | orchestrator | Thursday 09 April 2026 04:07:23 +0000 (0:00:03.593) 0:00:39.941 ******** 2026-04-09 04:07:23.466867 | orchestrator | =============================================================================== 2026-04-09 04:07:23.466879 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.98s 2026-04-09 04:07:23.466893 | orchestrator | Set public network to default ------------------------------------------- 6.48s 2026-04-09 04:07:23.466906 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 6.45s 2026-04-09 04:07:23.466919 | orchestrator | Create public network --------------------------------------------------- 5.37s 2026-04-09 04:07:23.466960 | orchestrator | Create public subnet ---------------------------------------------------- 4.15s 2026-04-09 04:07:23.466972 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.97s 2026-04-09 04:07:23.466983 | orchestrator | Create manager role ----------------------------------------------------- 3.59s 2026-04-09 04:07:23.466994 | orchestrator | Gathering Facts --------------------------------------------------------- 1.87s 2026-04-09 04:07:26.028147 | orchestrator | 2026-04-09 04:07:26 | INFO  | It takes a moment until task 1ea5c5de-8bd1-4fde-bf86-4dcb82db25ae (image-manager) has been started and output is visible here. 2026-04-09 04:08:10.136899 | orchestrator | 2026-04-09 04:07:28 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-09 04:08:10.137015 | orchestrator | 2026-04-09 04:07:29 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-09 04:08:10.137030 | orchestrator | 2026-04-09 04:07:29 | INFO  | Importing image Cirros 0.6.2 2026-04-09 04:08:10.137038 | orchestrator | 2026-04-09 04:07:29 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-09 04:08:10.137046 | orchestrator | 2026-04-09 04:07:31 | INFO  | Waiting for image to leave queued state... 2026-04-09 04:08:10.137055 | orchestrator | 2026-04-09 04:07:33 | INFO  | Waiting for import to complete... 
2026-04-09 04:08:10.137062 | orchestrator | 2026-04-09 04:07:43 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-09 04:08:10.137071 | orchestrator | 2026-04-09 04:07:43 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-09 04:08:10.137113 | orchestrator | 2026-04-09 04:07:43 | INFO  | Setting internal_version = 0.6.2 2026-04-09 04:08:10.137122 | orchestrator | 2026-04-09 04:07:43 | INFO  | Setting image_original_user = cirros 2026-04-09 04:08:10.137130 | orchestrator | 2026-04-09 04:07:43 | INFO  | Adding tag os:cirros 2026-04-09 04:08:10.137137 | orchestrator | 2026-04-09 04:07:44 | INFO  | Setting property architecture: x86_64 2026-04-09 04:08:10.137143 | orchestrator | 2026-04-09 04:07:44 | INFO  | Setting property hw_disk_bus: scsi 2026-04-09 04:08:10.137149 | orchestrator | 2026-04-09 04:07:44 | INFO  | Setting property hw_rng_model: virtio 2026-04-09 04:08:10.137155 | orchestrator | 2026-04-09 04:07:45 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-09 04:08:10.137162 | orchestrator | 2026-04-09 04:07:45 | INFO  | Setting property hw_watchdog_action: reset 2026-04-09 04:08:10.137168 | orchestrator | 2026-04-09 04:07:45 | INFO  | Setting property hypervisor_type: qemu 2026-04-09 04:08:10.137174 | orchestrator | 2026-04-09 04:07:45 | INFO  | Setting property os_distro: cirros 2026-04-09 04:08:10.137180 | orchestrator | 2026-04-09 04:07:46 | INFO  | Setting property os_purpose: minimal 2026-04-09 04:08:10.137186 | orchestrator | 2026-04-09 04:07:46 | INFO  | Setting property replace_frequency: never 2026-04-09 04:08:10.137192 | orchestrator | 2026-04-09 04:07:46 | INFO  | Setting property uuid_validity: none 2026-04-09 04:08:10.137198 | orchestrator | 2026-04-09 04:07:46 | INFO  | Setting property provided_until: none 2026-04-09 04:08:10.137205 | orchestrator | 2026-04-09 04:07:47 | INFO  | Setting property image_description: Cirros 2026-04-09 04:08:10.137211 | orchestrator | 2026-04-09 04:07:47 | INFO  | 
Setting property image_name: Cirros
2026-04-09 04:08:10.137218 | orchestrator | 2026-04-09 04:07:47 | INFO  | Setting property internal_version: 0.6.2
2026-04-09 04:08:10.137223 | orchestrator | 2026-04-09 04:07:47 | INFO  | Setting property image_original_user: cirros
2026-04-09 04:08:10.137252 | orchestrator | 2026-04-09 04:07:48 | INFO  | Setting property os_version: 0.6.2
2026-04-09 04:08:10.137268 | orchestrator | 2026-04-09 04:07:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-09 04:08:10.137276 | orchestrator | 2026-04-09 04:07:48 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-09 04:08:10.137307 | orchestrator | 2026-04-09 04:07:49 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-09 04:08:10.137313 | orchestrator | 2026-04-09 04:07:49 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-09 04:08:10.137319 | orchestrator | 2026-04-09 04:07:49 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-09 04:08:10.137326 | orchestrator | 2026-04-09 04:07:49 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-09 04:08:10.137336 | orchestrator | 2026-04-09 04:07:49 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-09 04:08:10.137342 | orchestrator | 2026-04-09 04:07:49 | INFO  | Importing image Cirros 0.6.3
2026-04-09 04:08:10.137349 | orchestrator | 2026-04-09 04:07:49 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-09 04:08:10.137355 | orchestrator | 2026-04-09 04:07:51 | INFO  | Waiting for image to leave queued state...
2026-04-09 04:08:10.137362 | orchestrator | 2026-04-09 04:07:53 | INFO  | Waiting for import to complete...
2026-04-09 04:08:10.137384 | orchestrator | 2026-04-09 04:08:03 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-09 04:08:10.137391 | orchestrator | 2026-04-09 04:08:04 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-09 04:08:10.137398 | orchestrator | 2026-04-09 04:08:04 | INFO  | Setting internal_version = 0.6.3
2026-04-09 04:08:10.137404 | orchestrator | 2026-04-09 04:08:04 | INFO  | Setting image_original_user = cirros
2026-04-09 04:08:10.137409 | orchestrator | 2026-04-09 04:08:04 | INFO  | Adding tag os:cirros
2026-04-09 04:08:10.137414 | orchestrator | 2026-04-09 04:08:04 | INFO  | Setting property architecture: x86_64
2026-04-09 04:08:10.137419 | orchestrator | 2026-04-09 04:08:04 | INFO  | Setting property hw_disk_bus: scsi
2026-04-09 04:08:10.137425 | orchestrator | 2026-04-09 04:08:05 | INFO  | Setting property hw_rng_model: virtio
2026-04-09 04:08:10.137431 | orchestrator | 2026-04-09 04:08:05 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-09 04:08:10.137436 | orchestrator | 2026-04-09 04:08:05 | INFO  | Setting property hw_watchdog_action: reset
2026-04-09 04:08:10.137442 | orchestrator | 2026-04-09 04:08:05 | INFO  | Setting property hypervisor_type: qemu
2026-04-09 04:08:10.137448 | orchestrator | 2026-04-09 04:08:06 | INFO  | Setting property os_distro: cirros
2026-04-09 04:08:10.137455 | orchestrator | 2026-04-09 04:08:06 | INFO  | Setting property os_purpose: minimal
2026-04-09 04:08:10.137461 | orchestrator | 2026-04-09 04:08:06 | INFO  | Setting property replace_frequency: never
2026-04-09 04:08:10.137468 | orchestrator | 2026-04-09 04:08:06 | INFO  | Setting property uuid_validity: none
2026-04-09 04:08:10.137475 | orchestrator | 2026-04-09 04:08:06 | INFO  | Setting property provided_until: none
2026-04-09 04:08:10.137482 | orchestrator | 2026-04-09 04:08:07 | INFO  | Setting property image_description: Cirros
2026-04-09 04:08:10.137488 | orchestrator | 2026-04-09 04:08:07 | INFO  | Setting property image_name: Cirros
2026-04-09 04:08:10.137495 | orchestrator | 2026-04-09 04:08:07 | INFO  | Setting property internal_version: 0.6.3
2026-04-09 04:08:10.137508 | orchestrator | 2026-04-09 04:08:08 | INFO  | Setting property image_original_user: cirros
2026-04-09 04:08:10.137514 | orchestrator | 2026-04-09 04:08:08 | INFO  | Setting property os_version: 0.6.3
2026-04-09 04:08:10.137521 | orchestrator | 2026-04-09 04:08:08 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-09 04:08:10.137527 | orchestrator | 2026-04-09 04:08:08 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-09 04:08:10.137534 | orchestrator | 2026-04-09 04:08:09 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-09 04:08:10.137539 | orchestrator | 2026-04-09 04:08:09 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-09 04:08:10.137545 | orchestrator | 2026-04-09 04:08:09 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-09 04:08:10.462119 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh
2026-04-09 04:08:12.811459 | orchestrator | 2026-04-09 04:08:12 | INFO  | date: 2026-04-09
2026-04-09 04:08:12.811530 | orchestrator | 2026-04-09 04:08:12 | INFO  | image: octavia-amphora-haproxy-2024.2.20260409.qcow2
2026-04-09 04:08:12.811555 | orchestrator | 2026-04-09 04:08:12 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2
2026-04-09 04:08:12.811563 | orchestrator | 2026-04-09 04:08:12 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2.CHECKSUM
2026-04-09 04:08:12.971996 | orchestrator | 2026-04-09 04:08:12 | INFO  | checksum: 8d87a584e20490e0986eb683817610aad621ddd76b8738398584d5449d1a8f22
2026-04-09 04:08:13.049743 | orchestrator | 2026-04-09 04:08:13 | INFO  | It takes a moment until task af73bc47-2f19-4f74-8826-1240b5eddbb3 (image-manager) has been started and output is visible here.
2026-04-09 04:09:26.491381 | orchestrator | 2026-04-09 04:08:15 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-09'
2026-04-09 04:09:26.491526 | orchestrator | 2026-04-09 04:08:15 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2: 200
2026-04-09 04:09:26.491549 | orchestrator | 2026-04-09 04:08:15 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-09
2026-04-09 04:09:26.491565 | orchestrator | 2026-04-09 04:08:15 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2
2026-04-09 04:09:26.491581 | orchestrator | 2026-04-09 04:08:16 | INFO  | Waiting for image to leave queued state...
2026-04-09 04:09:26.491596 | orchestrator | 2026-04-09 04:08:18 | INFO  | Waiting for import to complete...
2026-04-09 04:09:26.491612 | orchestrator | 2026-04-09 04:08:29 | INFO  | Waiting for import to complete...
2026-04-09 04:09:26.491666 | orchestrator | 2026-04-09 04:08:39 | INFO  | Waiting for import to complete...
2026-04-09 04:09:26.491683 | orchestrator | 2026-04-09 04:08:49 | INFO  | Waiting for import to complete...
2026-04-09 04:09:26.491699 | orchestrator | 2026-04-09 04:08:59 | INFO  | Waiting for import to complete...
2026-04-09 04:09:26.491715 | orchestrator | 2026-04-09 04:09:09 | INFO  | Waiting for import to complete...
2026-04-09 04:09:26.491729 | orchestrator | 2026-04-09 04:09:19 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-09' successfully completed, reloading images
2026-04-09 04:09:26.491745 | orchestrator | 2026-04-09 04:09:20 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-09'
2026-04-09 04:09:26.491791 | orchestrator | 2026-04-09 04:09:20 | INFO  | Setting internal_version = 2026-04-09
2026-04-09 04:09:26.491805 | orchestrator | 2026-04-09 04:09:20 | INFO  | Setting image_original_user = ubuntu
2026-04-09 04:09:26.491820 | orchestrator | 2026-04-09 04:09:20 | INFO  | Adding tag amphora
2026-04-09 04:09:26.491835 | orchestrator | 2026-04-09 04:09:20 | INFO  | Adding tag os:ubuntu
2026-04-09 04:09:26.491850 | orchestrator | 2026-04-09 04:09:20 | INFO  | Setting property architecture: x86_64
2026-04-09 04:09:26.491865 | orchestrator | 2026-04-09 04:09:21 | INFO  | Setting property hw_disk_bus: scsi
2026-04-09 04:09:26.491878 | orchestrator | 2026-04-09 04:09:21 | INFO  | Setting property hw_rng_model: virtio
2026-04-09 04:09:26.491893 | orchestrator | 2026-04-09 04:09:21 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-09 04:09:26.491909 | orchestrator | 2026-04-09 04:09:22 | INFO  | Setting property hw_watchdog_action: reset
2026-04-09 04:09:26.491928 | orchestrator | 2026-04-09 04:09:22 | INFO  | Setting property hypervisor_type: qemu
2026-04-09 04:09:26.491943 | orchestrator | 2026-04-09 04:09:22 | INFO  | Setting property os_distro: ubuntu
2026-04-09 04:09:26.491959 | orchestrator | 2026-04-09 04:09:22 | INFO  | Setting property replace_frequency: quarterly
2026-04-09 04:09:26.491976 | orchestrator | 2026-04-09 04:09:23 | INFO  | Setting property uuid_validity: last-1
2026-04-09 04:09:26.491993 | orchestrator | 2026-04-09 04:09:23 | INFO  | Setting property provided_until: none
2026-04-09 04:09:26.492008 | orchestrator | 2026-04-09 04:09:23 | INFO  | Setting property os_purpose: network
2026-04-09 04:09:26.492065 | orchestrator | 2026-04-09 04:09:24 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-09 04:09:26.492081 | orchestrator | 2026-04-09 04:09:24 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-09 04:09:26.492096 | orchestrator | 2026-04-09 04:09:24 | INFO  | Setting property internal_version: 2026-04-09
2026-04-09 04:09:26.492111 | orchestrator | 2026-04-09 04:09:24 | INFO  | Setting property image_original_user: ubuntu
2026-04-09 04:09:26.492126 | orchestrator | 2026-04-09 04:09:24 | INFO  | Setting property os_version: 2026-04-09
2026-04-09 04:09:26.492143 | orchestrator | 2026-04-09 04:09:25 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2
2026-04-09 04:09:26.492158 | orchestrator | 2026-04-09 04:09:25 | INFO  | Setting property image_build_date: 2026-04-09
2026-04-09 04:09:26.492173 | orchestrator | 2026-04-09 04:09:25 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-09'
2026-04-09 04:09:26.492187 | orchestrator | 2026-04-09 04:09:25 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-09'
2026-04-09 04:09:26.492222 | orchestrator | 2026-04-09 04:09:26 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-09 04:09:26.492239 | orchestrator | 2026-04-09 04:09:26 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-09 04:09:26.492256 | orchestrator | 2026-04-09 04:09:26 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-09 04:09:26.492272 | orchestrator | 2026-04-09 04:09:26 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-09 04:09:27.273725 | orchestrator | ok: Runtime: 0:03:12.079707
2026-04-09 04:09:27.292327 |
2026-04-09 04:09:27.292497 | TASK [Run checks]
2026-04-09 04:09:28.092257 | orchestrator | + set -e
2026-04-09 04:09:28.092459 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 04:09:28.092486 | orchestrator | ++ export INTERACTIVE=false
2026-04-09 04:09:28.092508 | orchestrator | ++ INTERACTIVE=false
2026-04-09 04:09:28.092522 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 04:09:28.092535 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 04:09:28.092549 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-09 04:09:28.093214 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-09 04:09:28.100364 | orchestrator |
2026-04-09 04:09:28.100482 | orchestrator | # CHECK
2026-04-09 04:09:28.100509 | orchestrator |
2026-04-09 04:09:28.100529 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-09 04:09:28.100554 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-09 04:09:28.100573 | orchestrator | + echo
2026-04-09 04:09:28.100591 | orchestrator | + echo '# CHECK'
2026-04-09 04:09:28.100607 | orchestrator | + echo
2026-04-09 04:09:28.100629 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 04:09:28.101259 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-09 04:09:28.165641 | orchestrator |
2026-04-09 04:09:28.165746 | orchestrator | ## Containers @ testbed-manager
2026-04-09 04:09:28.165763 | orchestrator |
2026-04-09 04:09:28.165778 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-09 04:09:28.165791 | orchestrator | + echo
2026-04-09 04:09:28.165803 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-09 04:09:28.165815 | orchestrator | + echo
2026-04-09 04:09:28.165827 | orchestrator | + osism container testbed-manager ps
2026-04-09 04:09:30.429253 | orchestrator | 2026-04-09 04:09:30 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-09 04:09:30.821427 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-09 04:09:30.821516 | orchestrator | a9310b0286a4 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-04-09 04:09:30.821529 | orchestrator | 4f590adef399 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-04-09 04:09:30.821540 | orchestrator | 80246b80497c registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-09 04:09:30.821545 | orchestrator | a1997ee106fb registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-09 04:09:30.821550 | orchestrator | 8c9166f8d97c registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-04-09 04:09:30.821558 | orchestrator | 5340c79f7457 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" About an hour ago Up About an hour cephclient
2026-04-09 04:09:30.821563 | orchestrator | 87dad6624cbb registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-09 04:09:30.821569 | orchestrator | 0af42731a711 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-09 04:09:30.821589 | orchestrator | 38cf9b390bd4 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-09 04:09:30.821595 | orchestrator | ea3e30651ba7 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-04-09 04:09:30.821600 | orchestrator | 206db9928f06 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-04-09 04:09:30.821605 | orchestrator | b4500a1bc2c9 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-04-09 04:09:30.821610 | orchestrator | 394158c29096 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-04-09 04:09:30.821615 | orchestrator | 9a9dccf2d25e registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-09 04:09:30.821635 | orchestrator | 1c30873b65f2 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-04-09 04:09:30.821641 | orchestrator | 9227dc508fab registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-04-09 04:09:30.821646 | orchestrator | 185cb668e26a registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-04-09 04:09:30.821651 | orchestrator | 44e4081900b2 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-04-09 04:09:30.821656 | orchestrator | f9243be48676 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-04-09 04:09:30.821660 | orchestrator | 558dff013697 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-04-09 04:09:30.821665 | orchestrator | 88ac8be65a6d registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-04-09 04:09:30.821670 | orchestrator | a4326919cc32 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-04-09 04:09:30.821680 | orchestrator | d28f2daf4d79 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-09 04:09:30.821685 | orchestrator | e72a4e7b875b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-04-09 04:09:30.821690 | orchestrator | 4b9901ad3bd7 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-04-09 04:09:30.821695 | orchestrator | 0307b8233257 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-04-09 04:09:30.821699 | orchestrator | 36b0e5ff9d56 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-09 04:09:30.821704 | orchestrator | 6f980e4bb9c4 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-04-09 04:09:30.821711 | orchestrator | 4e261ba6b76b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-04-09 04:09:30.821716 | orchestrator | 4ff1970252fa registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-09 04:09:31.316416 | orchestrator |
2026-04-09 04:09:31.316509 | orchestrator | ## Images @ testbed-manager
2026-04-09 04:09:31.316521 | orchestrator |
2026-04-09 04:09:31.316530 | orchestrator | + echo
2026-04-09 04:09:31.316539 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-09 04:09:31.316548 | orchestrator | + echo
2026-04-09 04:09:31.316559 | orchestrator | + osism container testbed-manager images
2026-04-09 04:09:33.872256 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 04:09:33.872372 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 7cc0762d03ae 24 hours ago 239MB
2026-04-09 04:09:33.872389 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-09 04:09:33.872401 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-09 04:09:33.872412 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB
2026-04-09 04:09:33.872426 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-09 04:09:33.872438 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-09 04:09:33.872449 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-09 04:09:33.872460 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB
2026-04-09 04:09:33.872471 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-09 04:09:33.872506 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB
2026-04-09 04:09:33.872517 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB
2026-04-09 04:09:33.872528 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-09 04:09:33.872539 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB
2026-04-09 04:09:33.872550 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB
2026-04-09 04:09:33.872584 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB
2026-04-09 04:09:33.872596 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB
2026-04-09 04:09:33.872621 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB
2026-04-09 04:09:33.872632 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB
2026-04-09 04:09:33.872659 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-09 04:09:33.872671 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-09 04:09:33.872682 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-09 04:09:33.872693 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-09 04:09:33.872704 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB
2026-04-09 04:09:33.872715 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-09 04:09:33.872725 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-04-09 04:09:34.332058 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 04:09:34.332279 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-09 04:09:34.384717 | orchestrator |
2026-04-09 04:09:34.384832 | orchestrator | ## Containers @ testbed-node-0
2026-04-09 04:09:34.384859 | orchestrator |
2026-04-09 04:09:34.384879 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-09 04:09:34.384901 | orchestrator | + echo
2026-04-09 04:09:34.384920 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-09 04:09:34.384933 | orchestrator | + echo
2026-04-09 04:09:34.384944 | orchestrator | + osism container testbed-node-0 ps
2026-04-09 04:09:36.998153 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-09 04:09:36.998245 | orchestrator | b171134f965e registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-09 04:09:36.998258 | orchestrator | 70024c9749b1 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-04-09 04:09:36.998268 | orchestrator | 61298db88edf registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-04-09 04:09:36.998276 | orchestrator | 69954de2f269 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-09 04:09:36.998306 | orchestrator | 3497a8d4e8db registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-09 04:09:36.998315 | orchestrator | bc55b0584805 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-04-09 04:09:36.998329 | orchestrator | 2a38268a2ce4 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-09 04:09:36.998338 | orchestrator | 969f88871925 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-09 04:09:36.998347 | orchestrator | de955d63b37a registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-04-09 04:09:36.998355 | orchestrator | edb0ca0a35df registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-04-09 04:09:36.998363 | orchestrator | d5238ee09971 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-04-09 04:09:36.998371 | orchestrator | 27a2c1733a22 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-04-09 04:09:36.998378 | orchestrator | 46bc163ef082 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-04-09 04:09:36.998386 | orchestrator | 3352755813a3 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-04-09 04:09:36.998394 | orchestrator | 3f4892193986 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-04-09 04:09:36.998402 | orchestrator | 6a135b5cb693 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-04-09 04:09:36.998414 | orchestrator | 71b2c0b8fdf7 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-04-09 04:09:36.998423 | orchestrator | 11d54413adc6 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-04-09 04:09:36.998431 | orchestrator | 14b73bbfd4df registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-04-09 04:09:36.998456 | orchestrator | cb2a6433d162 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-04-09 04:09:36.998465 | orchestrator | bbdc768186c7 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-04-09 04:09:36.998473 | orchestrator | 4ca434b3d982 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-04-09 04:09:36.998487 | orchestrator | c6388a89308b registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-04-09 04:09:36.998495 | orchestrator | b5dbf310078b registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-04-09 04:09:36.998503 | orchestrator | 88aae7d2df47 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-04-09 04:09:36.998516 | orchestrator | 87109a4172dc registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-04-09 04:09:36.998524 | orchestrator | 2a6f8c6c4848 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-04-09 04:09:36.998532 | orchestrator | 6fea8364e62d registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-04-09 04:09:36.998540 | orchestrator | 019b3e12f4b3 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-04-09 04:09:36.998548 | orchestrator | 82f0ece9f21c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-04-09 04:09:36.998556 | orchestrator | 6d23e680ba13 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-04-09 04:09:36.998564 | orchestrator | f255844cb8ef registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-09 04:09:36.998572 | orchestrator | eed4b2d38712 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-04-09 04:09:36.998580 | orchestrator | 453779083a22 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-04-09 04:09:36.998588 | orchestrator | b244d25de170 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-04-09 04:09:36.998595 | orchestrator | 369adcff510b registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-04-09 04:09:36.998603 | orchestrator | 0e816ee0fd93 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-04-09 04:09:36.998611 | orchestrator | 70432375546f registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-04-09 04:09:36.998623 | orchestrator | 467c9866922d registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-04-09 04:09:36.998643 | orchestrator | 33cf283927e1 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-04-09 04:09:36.998651 | orchestrator | d3fc5bce4b3b registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-04-09 04:09:36.998660 | orchestrator | 2678b34d6e85 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor
2026-04-09 04:09:36.998667 | orchestrator | cec277db3313 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-04-09 04:09:36.998675 | orchestrator | 3a0ad9da6343 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-04-09 04:09:36.998683 | orchestrator | b798d40328e2 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server
2026-04-09 04:09:36.998691 | orchestrator | 5eae6438e51b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api
2026-04-09 04:09:36.998699 | orchestrator | aceec82781ac registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone
2026-04-09 04:09:36.998707 | orchestrator | 2c0631febd30 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 56 minutes (healthy) keystone_fernet
2026-04-09 04:09:36.998715 | orchestrator | 35fa160dd7aa registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh
2026-04-09 04:09:36.998723 | orchestrator | 4158873766f4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-0
2026-04-09 04:09:36.998731 | orchestrator | c23e9d50a5ef registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-04-09 04:09:36.998742 | orchestrator | 3b46de499f20 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-04-09 04:09:36.998751 | orchestrator | d42b0469d32a registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-09 04:09:36.998758 | orchestrator | 09c7033d49a6 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-09 04:09:36.998766 | orchestrator | e0bd5feb6616 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-09 04:09:36.998774 | orchestrator | c74084405836 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-09 04:09:36.998782 | orchestrator | f7412afe8200 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-09 04:09:36.998796 | orchestrator | 9998551cc9e1 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-09 04:09:36.998804 | orchestrator | 2c16344e7142 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-09 04:09:36.998817 | orchestrator | 55da8d2dd800 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-09 04:09:36.998825 | orchestrator | 0b29a7d6f786 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-04-09 04:09:36.998833 | orchestrator | a0323ee0a4c4 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis
2026-04-09 04:09:36.998841 | orchestrator | 04d7f18cf6b5 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-09 04:09:36.998849 | orchestrator | 693927d06ba0 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-09 04:09:36.998857 | orchestrator | ef8574ce5e35 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-09 04:09:36.998865 | orchestrator | 6cd1c8931579 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-09 04:09:36.998873 | orchestrator | b0733789545b registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-09 04:09:36.998881 | orchestrator | 403242a36550 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-09 04:09:36.998889 | orchestrator | 276264858dd3 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-09 04:09:36.998897 | orchestrator | 7af3e4ecfd42 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-09 04:09:36.998905 | orchestrator | 654d309e67e6 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-09 04:09:37.439498 | orchestrator |
2026-04-09 04:09:37.439586 | orchestrator | ## Images @ testbed-node-0
2026-04-09 04:09:37.439598 | orchestrator |
2026-04-09 04:09:37.439608 | orchestrator | + echo
2026-04-09 04:09:37.439616 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-09 04:09:37.439626 | orchestrator | + echo
2026-04-09 04:09:37.439634 | orchestrator | + osism container testbed-node-0 images
2026-04-09 04:09:40.118938 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 04:09:40.119131 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-09 04:09:40.119159 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-09 04:09:40.119172 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-09 04:09:40.119219 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-09 04:09:40.119232 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-09 04:09:40.119243 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-09 04:09:40.119254 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-09 04:09:40.119266 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-09 04:09:40.119277 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-09 04:09:40.119288 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-09 04:09:40.119298 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-09 04:09:40.119309 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-09 04:09:40.119320 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-09 04:09:40.119331 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-09 04:09:40.119342 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-09 04:09:40.119352 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-09 04:09:40.119363 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-09 04:09:40.119386 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-09 04:09:40.119398 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-09 04:09:40.119409 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-09 04:09:40.119420 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-09 04:09:40.119431 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-09 04:09:40.119441 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-09 04:09:40.119452 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy
30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-09 04:09:40.119466 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-09 04:09:40.119479 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-09 04:09:40.119492 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-09 04:09:40.119504 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-09 04:09:40.119518 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-09 04:09:40.119539 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-09 04:09:40.119553 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-09 04:09:40.119586 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-09 04:09:40.119600 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-09 04:09:40.119619 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-09 04:09:40.119638 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-09 04:09:40.119657 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-09 04:09:40.119675 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-09 04:09:40.119693 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 
abbd6e9f87e2 4 months ago 974MB 2026-04-09 04:09:40.119711 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-09 04:09:40.119730 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-09 04:09:40.119749 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-09 04:09:40.119768 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-09 04:09:40.119789 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-09 04:09:40.119807 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-09 04:09:40.119826 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-09 04:09:40.119839 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-09 04:09:40.119857 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-09 04:09:40.119869 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-09 04:09:40.119880 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-09 04:09:40.119891 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-09 04:09:40.119902 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-09 04:09:40.119912 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 
b1fcfbc49057 4 months ago 1.1GB 2026-04-09 04:09:40.119923 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-09 04:09:40.119934 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-09 04:09:40.119945 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-09 04:09:40.119964 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-09 04:09:40.120002 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-09 04:09:40.120015 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-09 04:09:40.120026 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-09 04:09:40.120037 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-09 04:09:40.120049 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-09 04:09:40.120060 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-09 04:09:40.120071 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-09 04:09:40.120091 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-09 04:09:40.120102 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-09 04:09:40.120113 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 
e52b6499881a 4 months ago 840MB 2026-04-09 04:09:40.120124 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-09 04:09:40.120135 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-09 04:09:40.120146 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-09 04:09:40.626283 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-09 04:09:40.626801 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-09 04:09:40.693633 | orchestrator | 2026-04-09 04:09:40.693715 | orchestrator | ## Containers @ testbed-node-1 2026-04-09 04:09:40.693730 | orchestrator | 2026-04-09 04:09:40.693738 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-09 04:09:40.693746 | orchestrator | + echo 2026-04-09 04:09:40.693754 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-09 04:09:40.693763 | orchestrator | + echo 2026-04-09 04:09:40.693771 | orchestrator | + osism container testbed-node-1 ps 2026-04-09 04:09:43.288247 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-09 04:09:43.288328 | orchestrator | 894edaeb8803 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-09 04:09:43.288340 | orchestrator | fde1c55805d6 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-09 04:09:43.288349 | orchestrator | df8f5e37cc26 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-04-09 04:09:43.288356 | orchestrator | 75fa3afeed46 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 
2026-04-09 04:09:43.288380 | orchestrator | 757907f4082f registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-09 04:09:43.288405 | orchestrator | 78e6675f7777 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-04-09 04:09:43.288414 | orchestrator | 028e65fe91a4 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-09 04:09:43.288425 | orchestrator | 189bab8a238c registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-09 04:09:43.288432 | orchestrator | a822781644c1 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-04-09 04:09:43.288440 | orchestrator | e618e566b123 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-04-09 04:09:43.288447 | orchestrator | 35cd1ae5bdb0 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-04-09 04:09:43.288454 | orchestrator | ce848b634754 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-04-09 04:09:43.288462 | orchestrator | 67f79fccf455 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-04-09 04:09:43.288469 | orchestrator | 9b1f9ca7c82e registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes 
(healthy) aodh_listener 2026-04-09 04:09:43.288476 | orchestrator | d1342980059f registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-04-09 04:09:43.288483 | orchestrator | fb4915c60b8a registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-04-09 04:09:43.288491 | orchestrator | 1dfc57b29729 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-04-09 04:09:43.288498 | orchestrator | 88fcb9f9985e registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-04-09 04:09:43.288505 | orchestrator | 5676bc23e1fa registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-04-09 04:09:43.288526 | orchestrator | 8394457b569e registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-04-09 04:09:43.288533 | orchestrator | 93ccf6316a6f registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-04-09 04:09:43.288540 | orchestrator | ca3432af405a registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-04-09 04:09:43.288547 | orchestrator | ba072a1fd5c2 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-04-09 04:09:43.288560 | orchestrator | c938f19587d9 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 
minutes ago Up 27 minutes (healthy) designate_worker 2026-04-09 04:09:43.288567 | orchestrator | 3b15e57cc07f registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-04-09 04:09:43.288574 | orchestrator | f2f34ef19747 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-04-09 04:09:43.288581 | orchestrator | 021a6800c8f7 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-04-09 04:09:43.288593 | orchestrator | 3e50d5b0740c registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-04-09 04:09:43.288600 | orchestrator | 41858fc0c438 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-04-09 04:09:43.288607 | orchestrator | abec04a33cc2 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-04-09 04:09:43.288614 | orchestrator | c84d0282657d registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-04-09 04:09:43.288622 | orchestrator | b0d3b7ada7f9 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-04-09 04:09:43.288629 | orchestrator | 55271750f086 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-04-09 04:09:43.288636 | orchestrator | 8e1838855667 
registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-04-09 04:09:43.288644 | orchestrator | 5afe1059e28f registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-04-09 04:09:43.288651 | orchestrator | 8bac1a9d0135 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-04-09 04:09:43.288658 | orchestrator | a17ab7f73d55 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-04-09 04:09:43.288665 | orchestrator | bf3229b91c9b registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-04-09 04:09:43.288672 | orchestrator | 07e9a04d9c44 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver 2026-04-09 04:09:43.288686 | orchestrator | 5f40cf2196be registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon 2026-04-09 04:09:43.288693 | orchestrator | 6d44df3090d5 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy 2026-04-09 04:09:43.288704 | orchestrator | c96e9c6f4bb5 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor 2026-04-09 04:09:43.288712 | orchestrator | 5b855ef4ea35 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api 2026-04-09 04:09:43.288719 | orchestrator | 50623c776ba3 
registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler 2026-04-09 04:09:43.288726 | orchestrator | 21734c86463b registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server 2026-04-09 04:09:43.288733 | orchestrator | 912061af4b41 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api 2026-04-09 04:09:43.288740 | orchestrator | cb570baf188e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone 2026-04-09 04:09:43.288747 | orchestrator | e70ea558d06d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_fernet 2026-04-09 04:09:43.288755 | orchestrator | 2f90c14e1ab7 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh 2026-04-09 04:09:43.288762 | orchestrator | eac91840926a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-1 2026-04-09 04:09:43.288770 | orchestrator | 4ad21bd16783 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-04-09 04:09:43.288777 | orchestrator | 344b9fc03006 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-04-09 04:09:43.288784 | orchestrator | ff30e11e6895 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-04-09 04:09:43.288791 | orchestrator | 30960aed23a5 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init 
--single-…" About an hour ago Up About an hour ovn_sb_db 2026-04-09 04:09:43.288802 | orchestrator | 7456c7843d9a registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-04-09 04:09:43.288811 | orchestrator | d02c97d6571e registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-04-09 04:09:43.288819 | orchestrator | c1ab8b9d3e21 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-04-09 04:09:43.288827 | orchestrator | 59af8c2c6695 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-04-09 04:09:43.288839 | orchestrator | 7d24e8011ad1 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-04-09 04:09:43.288852 | orchestrator | b0e3ffaa0ce7 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-04-09 04:09:43.288860 | orchestrator | a232effb52ca registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up About an hour (healthy) redis_sentinel 2026-04-09 04:09:43.288868 | orchestrator | f90dbe9c5c98 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-09 04:09:43.288876 | orchestrator | 854a4b1d3679 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-09 04:09:43.288884 | orchestrator | 4565b5790077 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours 
(healthy) opensearch_dashboards 2026-04-09 04:09:43.288892 | orchestrator | b8a87acdd72f registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-09 04:09:43.288900 | orchestrator | 7dbd28980281 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-09 04:09:43.288908 | orchestrator | 959c2b1a75c7 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-09 04:09:43.288916 | orchestrator | 212b5d0467ce registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-09 04:09:43.288925 | orchestrator | 46726a1a0ba2 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-09 04:09:43.288933 | orchestrator | b6fe80875d74 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-09 04:09:43.288941 | orchestrator | 9c3052ad1875 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-09 04:09:43.775851 | orchestrator | 2026-04-09 04:09:43.775943 | orchestrator | ## Images @ testbed-node-1 2026-04-09 04:09:43.775959 | orchestrator | 2026-04-09 04:09:43.776030 | orchestrator | + echo 2026-04-09 04:09:43.776042 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-09 04:09:43.776055 | orchestrator | + echo 2026-04-09 04:09:43.776066 | orchestrator | + osism container testbed-node-1 images 2026-04-09 04:09:46.453902 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-09 04:09:46.454154 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-09 04:09:46.454179 | orchestrator | registry.osism.tech/kolla/release/memcached 
1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-09 04:09:46.454192 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-09 04:09:46.454205 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-09 04:09:46.454216 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-09 04:09:46.454250 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-09 04:09:46.454261 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-09 04:09:46.454272 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-09 04:09:46.454284 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-09 04:09:46.454311 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-09 04:09:46.454322 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-09 04:09:46.454333 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-09 04:09:46.454344 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-09 04:09:46.454355 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-09 04:09:46.454366 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-09 04:09:46.454377 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-09 
04:09:46.454388 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-09 04:09:46.454400 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-09 04:09:46.454411 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-09 04:09:46.454440 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-09 04:09:46.454454 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-09 04:09:46.454468 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-09 04:09:46.454481 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-09 04:09:46.454494 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-09 04:09:46.454508 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-09 04:09:46.454529 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-09 04:09:46.454555 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-09 04:09:46.454573 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-09 04:09:46.454594 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-09 04:09:46.454614 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 
a85fdbb4bbba 4 months ago 1.13GB 2026-04-09 04:09:46.454637 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-09 04:09:46.454691 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-09 04:09:46.454711 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-09 04:09:46.454730 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-09 04:09:46.454748 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-09 04:09:46.454766 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-09 04:09:46.454792 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-09 04:09:46.454813 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-09 04:09:46.454831 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-09 04:09:46.454849 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-09 04:09:46.454867 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-09 04:09:46.454885 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-09 04:09:46.454909 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-09 04:09:46.454933 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 
1.04GB
2026-04-09 04:09:46.454951 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-09 04:09:46.454999 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-09 04:09:46.455023 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-09 04:09:46.455050 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-09 04:09:46.455069 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-09 04:09:46.455087 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-09 04:09:46.455105 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-09 04:09:46.455123 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-09 04:09:46.455139 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-09 04:09:46.455156 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-09 04:09:46.455174 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-09 04:09:46.455193 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-09 04:09:46.455211 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-09 04:09:46.455247 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-09 04:09:46.455267 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-09 04:09:46.455285 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-09 04:09:46.455302 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-09 04:09:46.455319 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-09 04:09:46.455338 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-09 04:09:46.455371 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-09 04:09:46.455390 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-09 04:09:46.455406 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-09 04:09:46.455424 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-09 04:09:46.455442 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-09 04:09:46.455461 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-09 04:09:46.904071 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 04:09:46.905272 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-09 04:09:46.976714 | orchestrator |
2026-04-09 04:09:46.976825 | orchestrator | ## Containers @ testbed-node-2
2026-04-09 04:09:46.976843 | orchestrator |
2026-04-09 04:09:46.976855 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-09 04:09:46.976867 | orchestrator | + echo
2026-04-09 04:09:46.976880 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-09 04:09:46.976892 | orchestrator | + echo
2026-04-09 04:09:46.976904 | orchestrator | + osism container testbed-node-2 ps
2026-04-09 04:09:49.634210 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-09 04:09:49.634493 | orchestrator | 246a7376eb29 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-09 04:09:49.634524 | orchestrator | adfb0cfbb617 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-04-09 04:09:49.634537 | orchestrator | 4bc19cc9bc3f registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-04-09 04:09:49.634549 | orchestrator | a40297b07898 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-09 04:09:49.634563 | orchestrator | 5250111a65ce registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-04-09 04:09:49.634574 | orchestrator | 9e1222259a68 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-04-09 04:09:49.634586 | orchestrator | d41a6a1d102b registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-09 04:09:49.634622 | orchestrator | 341b05ccbf6e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-09 04:09:49.634634 | orchestrator | 4444d4d2d62c registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-04-09 04:09:49.634646 | orchestrator | be7bb07f5b9b registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-04-09 04:09:49.634657 | orchestrator | 4c995b5f0e11 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-04-09 04:09:49.634673 | orchestrator | 268116c13402 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-04-09 04:09:49.634684 | orchestrator | ac6867eea27b registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-04-09 04:09:49.634696 | orchestrator | 0dd705c68008 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-04-09 04:09:49.634763 | orchestrator | 7753b703f9c5 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-04-09 04:09:49.634777 | orchestrator | eae588c1318e registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-04-09 04:09:49.634790 | orchestrator | 53ed93a21d53 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-04-09 04:09:49.634803 | orchestrator | b3a6da9a6c68 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-04-09 04:09:49.634816 | orchestrator | b06c3e44d5d0 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-04-09 04:09:49.634850 | orchestrator | e230611c2131 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-04-09 04:09:49.634864 | orchestrator | c90eb1febd77 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-04-09 04:09:49.634877 | orchestrator | f1fd213917eb registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-04-09 04:09:49.634890 | orchestrator | 5a80480ab69e registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-04-09 04:09:49.634903 | orchestrator | fcfee62fcaf0 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-04-09 04:09:49.634925 | orchestrator | 951473babfd4 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-04-09 04:09:49.634938 | orchestrator | 4e9b818440a6 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-04-09 04:09:49.634984 | orchestrator | 77bc39357e5f registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-04-09 04:09:49.634997 | orchestrator | 5076f4af6832 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-04-09 04:09:49.635010 | orchestrator | 20a9d6b0ce22 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-04-09 04:09:49.635168 | orchestrator | 46f6659ce21c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-04-09 04:09:49.635183 | orchestrator | c0afe6186616 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-04-09 04:09:49.635195 | orchestrator | b99ebbe92c4e registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-09 04:09:49.635206 | orchestrator | 419c9e36990e registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-04-09 04:09:49.635217 | orchestrator | 351d6dfd7474 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_volume
2026-04-09 04:09:49.635228 | orchestrator | f84a7a620898 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-04-09 04:09:49.635239 | orchestrator | 727dcf3fb8ba registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-04-09 04:09:49.635250 | orchestrator | 25d8c08c84bc registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-04-09 04:09:49.635261 | orchestrator | f4c4b7997621 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-04-09 04:09:49.635280 | orchestrator | af850ed828aa registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-04-09 04:09:49.635291 | orchestrator | c764f5a0a4e5 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-04-09 04:09:49.635303 | orchestrator | 77db3890e80d registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-04-09 04:09:49.635314 | orchestrator | 6f380fa54a44 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor
2026-04-09 04:09:49.635333 | orchestrator | 9c3d647f9fb1 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-04-09 04:09:49.635345 | orchestrator | 17db267e9c48 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-04-09 04:09:49.635356 | orchestrator | 5ce4a1776905 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server
2026-04-09 04:09:49.635367 | orchestrator | af449191ee8b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api
2026-04-09 04:09:49.635378 | orchestrator | b96b4fa58096 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone
2026-04-09 04:09:49.635389 | orchestrator | 4a644ccfe18c registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_fernet
2026-04-09 04:09:49.635400 | orchestrator | 614656af9c98 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh
2026-04-09 04:09:49.635411 | orchestrator | d342ee78b8c3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-2
2026-04-09 04:09:49.635430 | orchestrator | eeb365efb153 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-04-09 04:09:49.635442 | orchestrator | 66330ed4242e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-04-09 04:09:49.635457 | orchestrator | 371e277828d6 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-09 04:09:49.635469 | orchestrator | f5cb5627f0cc registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-09 04:09:49.635480 | orchestrator | 791fd1e74e46 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-09 04:09:49.635491 | orchestrator | bbe16c13c36c registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-09 04:09:49.635502 | orchestrator | 5c7e84768940 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-09 04:09:49.635513 | orchestrator | e7ced8d82295 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-09 04:09:49.635524 | orchestrator | 192867b548f2 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-09 04:09:49.635535 | orchestrator | c51416e8864a registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-09 04:09:49.635553 | orchestrator | d2be775e920c registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel
2026-04-09 04:09:49.635564 | orchestrator | 619cd9fbbe70 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis
2026-04-09 04:09:49.635575 | orchestrator | 2f43a5013cd9 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-09 04:09:49.635586 | orchestrator | bdd989ff1c22 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-09 04:09:49.635597 | orchestrator | 140434334eac registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-09 04:09:49.635608 | orchestrator | 114b44e1b8aa registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-09 04:09:49.635619 | orchestrator | 4805276a2060 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-09 04:09:49.635629 | orchestrator | b8b356f3b340 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-09 04:09:49.635640 | orchestrator | 6b7b4408b11a registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-09 04:09:49.635651 | orchestrator | 871397e9d8fa registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-09 04:09:49.635667 | orchestrator | 99e27259ccb4 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-09 04:09:50.150136 | orchestrator |
2026-04-09 04:09:50.150242 | orchestrator | ## Images @ testbed-node-2
2026-04-09 04:09:50.150260 | orchestrator |
2026-04-09 04:09:50.150273 | orchestrator | + echo
2026-04-09 04:09:50.150285 | orchestrator | + echo '## Images @ testbed-node-2'
2026-04-09 04:09:50.150298 | orchestrator | + echo
2026-04-09 04:09:50.150309 | orchestrator | + osism container testbed-node-2 images
2026-04-09 04:09:52.904268 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 04:09:52.904372 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-09 04:09:52.904387 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-09 04:09:52.904400 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-09 04:09:52.904411 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-09 04:09:52.904423 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-09 04:09:52.904434 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-09 04:09:52.904445 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-09 04:09:52.904478 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-09 04:09:52.904490 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-09 04:09:52.904501 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-09 04:09:52.904517 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-09 04:09:52.904529 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-09 04:09:52.904540 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-09 04:09:52.904551 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-09 04:09:52.904578 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-09 04:09:52.904590 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-09 04:09:52.904601 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-09 04:09:52.904612 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-09 04:09:52.904623 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-09 04:09:52.904634 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-09 04:09:52.904645 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-09 04:09:52.904656 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-09 04:09:52.904667 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-09 04:09:52.904678 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-09 04:09:52.904689 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-09 04:09:52.904700 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-09 04:09:52.904711 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-09 04:09:52.904722 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-09 04:09:52.904732 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-09 04:09:52.904743 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-09 04:09:52.904754 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-09 04:09:52.904783 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-09 04:09:52.904795 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-09 04:09:52.904819 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-09 04:09:52.904833 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-09 04:09:52.904846 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-09 04:09:52.904859 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-09 04:09:52.904873 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-09 04:09:52.904886 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-09 04:09:52.904898 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-09 04:09:52.904911 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-09 04:09:52.904924 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-09 04:09:52.904975 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-09 04:09:52.904988 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-09 04:09:52.905001 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-09 04:09:52.905014 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-09 04:09:52.905028 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-09 04:09:52.905041 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-09 04:09:52.905053 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-09 04:09:52.905066 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-09 04:09:52.905078 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-09 04:09:52.905092 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-09 04:09:52.905105 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-09 04:09:52.905118 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-09 04:09:52.905131 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-09 04:09:52.905144 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-09 04:09:52.905157 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-09 04:09:52.905171 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-09 04:09:52.905182 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-09 04:09:52.905200 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-09 04:09:52.905211 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-09 04:09:52.905222 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-09 04:09:52.905233 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-09 04:09:52.905251 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-09 04:09:52.905263 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-09 04:09:52.905274 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-09 04:09:52.905285 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-09 04:09:52.905296 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-09 04:09:52.905307 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-09 04:09:53.368875 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-04-09 04:09:53.374124 | orchestrator | + set -e
2026-04-09 04:09:53.374179 | orchestrator | + source /opt/manager-vars.sh
2026-04-09 04:09:53.374185 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-09 04:09:53.374191 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-09 04:09:53.374195 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-09 04:09:53.374198 | orchestrator | ++ CEPH_VERSION=reef
2026-04-09 04:09:53.374203 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-09 04:09:53.374208 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-09 04:09:53.374212 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-09 04:09:53.374217 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-09 04:09:53.374221 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-09 04:09:53.374225 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-09 04:09:53.374229 | orchestrator | ++ export ARA=false
2026-04-09 04:09:53.374233 | orchestrator | ++ ARA=false
2026-04-09 04:09:53.374237 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-09 04:09:53.374241 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-09 04:09:53.374244 | orchestrator | ++ export TEMPEST=false
2026-04-09 04:09:53.374248 | orchestrator | ++ TEMPEST=false
2026-04-09 04:09:53.374252 | orchestrator | ++ export IS_ZUUL=true
2026-04-09 04:09:53.374256 | orchestrator | ++ IS_ZUUL=true
2026-04-09 04:09:53.374260 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2026-04-09 04:09:53.374264 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2026-04-09 04:09:53.374268 | orchestrator | ++ export EXTERNAL_API=false
2026-04-09 04:09:53.374271 | orchestrator | ++ EXTERNAL_API=false
2026-04-09 04:09:53.374275 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-09 04:09:53.374279 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-09 04:09:53.374283 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-09 04:09:53.374287 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-09 04:09:53.374291 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-09 04:09:53.374295 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-09 04:09:53.374298 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-09 04:09:53.374302 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-04-09 04:09:53.385587 | orchestrator | + set -e
2026-04-09 04:09:53.385653 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 04:09:53.385666 | orchestrator | ++ export INTERACTIVE=false
2026-04-09 04:09:53.385676 | orchestrator | ++ INTERACTIVE=false
2026-04-09 04:09:53.385683 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 04:09:53.385690 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 04:09:53.385698 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-09 04:09:53.385969 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-09 04:09:53.389450 | orchestrator |
2026-04-09 04:09:53.389485 | orchestrator | # Ceph status
2026-04-09 04:09:53.389490 | orchestrator |
2026-04-09 04:09:53.389494 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-09 04:09:53.389499 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-09 04:09:53.389503 | orchestrator | + echo
2026-04-09 04:09:53.389507 | orchestrator | + echo '# Ceph status'
2026-04-09 04:09:53.389511 | orchestrator | + echo
2026-04-09 04:09:53.389515 | orchestrator | + ceph -s
2026-04-09 04:09:54.089547 | orchestrator | cluster:
2026-04-09 04:09:54.089669 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-04-09 04:09:54.089694 | orchestrator | health: HEALTH_OK
2026-04-09 04:09:54.089708 | orchestrator |
2026-04-09 04:09:54.089722 | orchestrator | services:
2026-04-09 04:09:54.089743 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 72m)
2026-04-09 04:09:54.089780 | orchestrator | mgr: testbed-node-1(active, since 59m), standbys: testbed-node-2, testbed-node-0
2026-04-09 04:09:54.089794 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-04-09 04:09:54.089806 | orchestrator | osd: 6 osds: 6 up (since 68m), 6 in (since 69m)
2026-04-09 04:09:54.089827 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-09 04:09:54.089846 | orchestrator |
2026-04-09 04:09:54.089864 | orchestrator | data:
2026-04-09 04:09:54.089878 | orchestrator | volumes: 1/1 healthy
2026-04-09 04:09:54.089890 | orchestrator | pools: 14 pools, 401 pgs
2026-04-09 04:09:54.089907 | orchestrator | objects: 554 objects, 2.2 GiB
2026-04-09 04:09:54.089927 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail
2026-04-09 04:09:54.090006 | orchestrator | pgs: 401 active+clean
2026-04-09 04:09:54.090101 | orchestrator |
2026-04-09 04:09:54.144192 | orchestrator |
2026-04-09 04:09:54.144288 | orchestrator | # Ceph versions
2026-04-09 04:09:54.144305 | orchestrator |
2026-04-09 04:09:54.144319 | orchestrator | + echo
2026-04-09 04:09:54.144333 | orchestrator | + echo '# Ceph versions'
2026-04-09 04:09:54.144348 | orchestrator | + echo
2026-04-09 04:09:54.144362 | orchestrator | + ceph versions
2026-04-09 04:09:54.758666 | orchestrator | {
2026-04-09 04:09:54.758756 | orchestrator | "mon": {
2026-04-09 04:09:54.758768 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-09 04:09:54.758777 | orchestrator | },
2026-04-09 04:09:54.758784 | orchestrator | "mgr": {
2026-04-09 04:09:54.758792 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-09 04:09:54.758799 | orchestrator | },
2026-04-09 04:09:54.758806 | orchestrator | "osd": {
2026-04-09 04:09:54.758813 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-04-09 04:09:54.758818 | orchestrator | },
2026-04-09 04:09:54.758822 | orchestrator | "mds": {
2026-04-09 04:09:54.758827 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-09 04:09:54.758831 | orchestrator | },
2026-04-09 04:09:54.758835 | orchestrator | "rgw": {
2026-04-09 04:09:54.758843 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-09 04:09:54.758850 | orchestrator | },
2026-04-09 04:09:54.758857 | orchestrator | "overall": {
2026-04-09 04:09:54.758884 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-04-09 04:09:54.758892 | orchestrator | }
2026-04-09 04:09:54.758899 | orchestrator | }
2026-04-09 04:09:54.816475 | orchestrator |
2026-04-09 04:09:54.816563 | orchestrator | # Ceph OSD tree
2026-04-09 04:09:54.816578 | orchestrator |
2026-04-09 04:09:54.816590 | orchestrator | + echo
2026-04-09 04:09:54.816601 | orchestrator | + echo '# Ceph OSD tree'
2026-04-09 04:09:54.816613 | orchestrator | + echo
2026-04-09 04:09:54.816624 | orchestrator | + ceph osd df tree
2026-04-09 04:09:55.389276 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-04-09 04:09:55.389345 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 369 MiB 113 GiB 5.87 1.00 - root default
2026-04-09 04:09:55.389352 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3
2026-04-09 04:09:55.389356 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 985 MiB 923 MiB 1 KiB 62 MiB 19 GiB 4.81 0.82 176 up osd.1
2026-04-09 04:09:55.389360 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.92 1.18 216 up osd.3
2026-04-09 04:09:55.389381 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4
2026-04-09 04:09:55.389396 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 62 MiB 18 GiB 7.55 1.29 200 up osd.0
2026-04-09 04:09:55.389400 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 856 MiB 795 MiB 1 KiB 62 MiB 19 GiB 4.18 0.71 190 up osd.4
2026-04-09 04:09:55.389403 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-5
2026-04-09 04:09:55.389408 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 7.04 1.20 191 up osd.2
2026-04-09 04:09:55.389412 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 961 MiB 899 MiB 1 KiB 62 MiB 19 GiB 4.70 0.80 197 up osd.5
2026-04-09 04:09:55.389416 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 369 MiB 113 GiB 5.87
2026-04-09 04:09:55.389420 | orchestrator | MIN/MAX VAR: 0.71/1.29 STDDEV: 1.33
2026-04-09 04:09:55.444088 | orchestrator |
2026-04-09 04:09:55.444175 | orchestrator | # Ceph monitor status
2026-04-09 04:09:55.444189 | orchestrator |
2026-04-09 04:09:55.444200 | orchestrator | + echo
2026-04-09 04:09:55.444210 | orchestrator | + echo '# Ceph monitor status'
2026-04-09 04:09:55.444220 | orchestrator | + echo
2026-04-09 04:09:55.444230 | orchestrator | + ceph mon stat
2026-04-09 04:09:56.067816 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-04-09 04:09:56.135000 | orchestrator |
2026-04-09 04:09:56.135110 | orchestrator | # Ceph quorum status
2026-04-09 04:09:56.135127 | orchestrator |
2026-04-09 04:09:56.135140 | orchestrator | + echo
2026-04-09 04:09:56.135151 | orchestrator | + echo '# Ceph quorum status'
2026-04-09 04:09:56.135163 | orchestrator | + echo
2026-04-09 04:09:56.135858 | orchestrator | + jq
2026-04-09 04:09:56.135882 | orchestrator | + ceph quorum_status
2026-04-09 04:09:56.831648 | orchestrator | {
2026-04-09 04:09:56.831800 | orchestrator | "election_epoch": 6,
2026-04-09 04:09:56.831833 | orchestrator | "quorum": [
2026-04-09 04:09:56.832833 | orchestrator | 0,
2026-04-09 04:09:56.832894 | orchestrator | 1,
2026-04-09 04:09:56.832908 | orchestrator | 2
2026-04-09 04:09:56.832919 | orchestrator | ],
2026-04-09 04:09:56.832962 | orchestrator | "quorum_names": [
2026-04-09 04:09:56.832971 | orchestrator | "testbed-node-0",
2026-04-09 04:09:56.832977 | orchestrator | "testbed-node-1",
2026-04-09 04:09:56.832983 | orchestrator | "testbed-node-2"
2026-04-09 04:09:56.832990 | orchestrator | ],
2026-04-09 04:09:56.832997 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-04-09 04:09:56.833005 | orchestrator | "quorum_age": 4323,
2026-04-09 04:09:56.833012 | orchestrator | "features": {
2026-04-09 04:09:56.833018 | orchestrator | "quorum_con": "4540138322906710015",
2026-04-09 04:09:56.833025 | orchestrator | "quorum_mon": [
2026-04-09 04:09:56.833031 | orchestrator | "kraken",
2026-04-09 04:09:56.833038 | orchestrator | "luminous",
2026-04-09 04:09:56.833045 | orchestrator | "mimic",
2026-04-09 04:09:56.833051 | orchestrator | "osdmap-prune",
2026-04-09 04:09:56.833057 | orchestrator | "nautilus", 2026-04-09 04:09:56.833063 | orchestrator | "octopus", 2026-04-09 04:09:56.833070 | orchestrator | "pacific", 2026-04-09 04:09:56.833076 | orchestrator | "elector-pinging", 2026-04-09 04:09:56.833082 | orchestrator | "quincy", 2026-04-09 04:09:56.833089 | orchestrator | "reef" 2026-04-09 04:09:56.833095 | orchestrator | ] 2026-04-09 04:09:56.833101 | orchestrator | }, 2026-04-09 04:09:56.833108 | orchestrator | "monmap": { 2026-04-09 04:09:56.833114 | orchestrator | "epoch": 1, 2026-04-09 04:09:56.833120 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-09 04:09:56.833129 | orchestrator | "modified": "2026-04-09T02:57:31.386456Z", 2026-04-09 04:09:56.833136 | orchestrator | "created": "2026-04-09T02:57:31.386456Z", 2026-04-09 04:09:56.833142 | orchestrator | "min_mon_release": 18, 2026-04-09 04:09:56.833148 | orchestrator | "min_mon_release_name": "reef", 2026-04-09 04:09:56.833155 | orchestrator | "election_strategy": 1, 2026-04-09 04:09:56.833177 | orchestrator | "disallowed_leaders: ": "", 2026-04-09 04:09:56.833183 | orchestrator | "stretch_mode": false, 2026-04-09 04:09:56.833231 | orchestrator | "tiebreaker_mon": "", 2026-04-09 04:09:56.833238 | orchestrator | "removed_ranks: ": "", 2026-04-09 04:09:56.833244 | orchestrator | "features": { 2026-04-09 04:09:56.833251 | orchestrator | "persistent": [ 2026-04-09 04:09:56.833257 | orchestrator | "kraken", 2026-04-09 04:09:56.833263 | orchestrator | "luminous", 2026-04-09 04:09:56.833291 | orchestrator | "mimic", 2026-04-09 04:09:56.833297 | orchestrator | "osdmap-prune", 2026-04-09 04:09:56.833303 | orchestrator | "nautilus", 2026-04-09 04:09:56.833309 | orchestrator | "octopus", 2026-04-09 04:09:56.833316 | orchestrator | "pacific", 2026-04-09 04:09:56.833322 | orchestrator | "elector-pinging", 2026-04-09 04:09:56.833328 | orchestrator | "quincy", 2026-04-09 04:09:56.833335 | orchestrator | "reef" 2026-04-09 04:09:56.833341 | 
orchestrator | ], 2026-04-09 04:09:56.833347 | orchestrator | "optional": [] 2026-04-09 04:09:56.833354 | orchestrator | }, 2026-04-09 04:09:56.833360 | orchestrator | "mons": [ 2026-04-09 04:09:56.833367 | orchestrator | { 2026-04-09 04:09:56.833381 | orchestrator | "rank": 0, 2026-04-09 04:09:56.833387 | orchestrator | "name": "testbed-node-0", 2026-04-09 04:09:56.833394 | orchestrator | "public_addrs": { 2026-04-09 04:09:56.833400 | orchestrator | "addrvec": [ 2026-04-09 04:09:56.833406 | orchestrator | { 2026-04-09 04:09:56.833413 | orchestrator | "type": "v2", 2026-04-09 04:09:56.833420 | orchestrator | "addr": "192.168.16.8:3300", 2026-04-09 04:09:56.833426 | orchestrator | "nonce": 0 2026-04-09 04:09:56.833432 | orchestrator | }, 2026-04-09 04:09:56.833439 | orchestrator | { 2026-04-09 04:09:56.833445 | orchestrator | "type": "v1", 2026-04-09 04:09:56.833451 | orchestrator | "addr": "192.168.16.8:6789", 2026-04-09 04:09:56.833458 | orchestrator | "nonce": 0 2026-04-09 04:09:56.833464 | orchestrator | } 2026-04-09 04:09:56.833470 | orchestrator | ] 2026-04-09 04:09:56.833476 | orchestrator | }, 2026-04-09 04:09:56.833483 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-04-09 04:09:56.833489 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-04-09 04:09:56.833495 | orchestrator | "priority": 0, 2026-04-09 04:09:56.833501 | orchestrator | "weight": 0, 2026-04-09 04:09:56.833508 | orchestrator | "crush_location": "{}" 2026-04-09 04:09:56.833514 | orchestrator | }, 2026-04-09 04:09:56.833520 | orchestrator | { 2026-04-09 04:09:56.833526 | orchestrator | "rank": 1, 2026-04-09 04:09:56.833533 | orchestrator | "name": "testbed-node-1", 2026-04-09 04:09:56.833539 | orchestrator | "public_addrs": { 2026-04-09 04:09:56.833545 | orchestrator | "addrvec": [ 2026-04-09 04:09:56.833552 | orchestrator | { 2026-04-09 04:09:56.833558 | orchestrator | "type": "v2", 2026-04-09 04:09:56.833564 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-09 04:09:56.833570 
| orchestrator | "nonce": 0 2026-04-09 04:09:56.833577 | orchestrator | }, 2026-04-09 04:09:56.833583 | orchestrator | { 2026-04-09 04:09:56.833589 | orchestrator | "type": "v1", 2026-04-09 04:09:56.833595 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-09 04:09:56.833601 | orchestrator | "nonce": 0 2026-04-09 04:09:56.833608 | orchestrator | } 2026-04-09 04:09:56.833614 | orchestrator | ] 2026-04-09 04:09:56.833620 | orchestrator | }, 2026-04-09 04:09:56.833627 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-09 04:09:56.833633 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-09 04:09:56.833639 | orchestrator | "priority": 0, 2026-04-09 04:09:56.833646 | orchestrator | "weight": 0, 2026-04-09 04:09:56.833652 | orchestrator | "crush_location": "{}" 2026-04-09 04:09:56.833658 | orchestrator | }, 2026-04-09 04:09:56.833664 | orchestrator | { 2026-04-09 04:09:56.833671 | orchestrator | "rank": 2, 2026-04-09 04:09:56.833677 | orchestrator | "name": "testbed-node-2", 2026-04-09 04:09:56.833683 | orchestrator | "public_addrs": { 2026-04-09 04:09:56.833689 | orchestrator | "addrvec": [ 2026-04-09 04:09:56.833696 | orchestrator | { 2026-04-09 04:09:56.833702 | orchestrator | "type": "v2", 2026-04-09 04:09:56.833708 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-09 04:09:56.833714 | orchestrator | "nonce": 0 2026-04-09 04:09:56.833721 | orchestrator | }, 2026-04-09 04:09:56.833727 | orchestrator | { 2026-04-09 04:09:56.833733 | orchestrator | "type": "v1", 2026-04-09 04:09:56.833740 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-09 04:09:56.833746 | orchestrator | "nonce": 0 2026-04-09 04:09:56.833758 | orchestrator | } 2026-04-09 04:09:56.833764 | orchestrator | ] 2026-04-09 04:09:56.833771 | orchestrator | }, 2026-04-09 04:09:56.833777 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-09 04:09:56.833783 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-09 04:09:56.833789 | orchestrator | "priority": 0, 
2026-04-09 04:09:56.833796 | orchestrator | "weight": 0, 2026-04-09 04:09:56.833813 | orchestrator | "crush_location": "{}" 2026-04-09 04:09:56.833820 | orchestrator | } 2026-04-09 04:09:56.833827 | orchestrator | ] 2026-04-09 04:09:56.833833 | orchestrator | } 2026-04-09 04:09:56.833839 | orchestrator | } 2026-04-09 04:09:56.833846 | orchestrator | 2026-04-09 04:09:56.833852 | orchestrator | # Ceph free space status 2026-04-09 04:09:56.833859 | orchestrator | 2026-04-09 04:09:56.833865 | orchestrator | + echo 2026-04-09 04:09:56.833872 | orchestrator | + echo '# Ceph free space status' 2026-04-09 04:09:56.833878 | orchestrator | + echo 2026-04-09 04:09:56.833884 | orchestrator | + ceph df 2026-04-09 04:09:57.549985 | orchestrator | --- RAW STORAGE --- 2026-04-09 04:09:57.550153 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-09 04:09:57.550179 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-04-09 04:09:57.550189 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-04-09 04:09:57.550199 | orchestrator | 2026-04-09 04:09:57.550209 | orchestrator | --- POOLS --- 2026-04-09 04:09:57.550220 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-09 04:09:57.550237 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-04-09 04:09:57.550251 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-09 04:09:57.550267 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-09 04:09:57.550281 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-09 04:09:57.550295 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-09 04:09:57.550311 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-09 04:09:57.550326 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-09 04:09:57.550337 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-09 04:09:57.550400 | orchestrator | .rgw.root 9 32 2.6 KiB 6 48 KiB 0 
52 GiB 2026-04-09 04:09:57.550410 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 04:09:57.550419 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 04:09:57.550428 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB 2026-04-09 04:09:57.550437 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 04:09:57.550445 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 04:09:57.606748 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-09 04:09:57.683862 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-09 04:09:57.684051 | orchestrator | + osism apply facts 2026-04-09 04:09:59.913042 | orchestrator | 2026-04-09 04:09:59 | INFO  | Task 83351513-286c-4fda-9b4e-15a3882c5610 (facts) was prepared for execution. 2026-04-09 04:09:59.913150 | orchestrator | 2026-04-09 04:09:59 | INFO  | It takes a moment until task 83351513-286c-4fda-9b4e-15a3882c5610 (facts) has been started and output is visible here. 2026-04-09 04:10:14.421130 | orchestrator | 2026-04-09 04:10:14.421248 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-09 04:10:14.421266 | orchestrator | 2026-04-09 04:10:14.421279 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-09 04:10:14.421291 | orchestrator | Thursday 09 April 2026 04:10:04 +0000 (0:00:00.314) 0:00:00.315 ******** 2026-04-09 04:10:14.421302 | orchestrator | ok: [testbed-manager] 2026-04-09 04:10:14.421314 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:10:14.421325 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:14.421335 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:10:14.421346 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:10:14.421356 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:10:14.421391 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:10:14.421403 | orchestrator | 2026-04-09 04:10:14.421414 | orchestrator | TASK [osism.commons.facts : Copy fact files] 
*********************************** 2026-04-09 04:10:14.421425 | orchestrator | Thursday 09 April 2026 04:10:06 +0000 (0:00:01.292) 0:00:01.607 ******** 2026-04-09 04:10:14.421437 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:10:14.421449 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:14.421460 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:10:14.421471 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:10:14.421482 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:10:14.421493 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:10:14.421504 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:10:14.421514 | orchestrator | 2026-04-09 04:10:14.421525 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 04:10:14.421536 | orchestrator | 2026-04-09 04:10:14.421547 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 04:10:14.421558 | orchestrator | Thursday 09 April 2026 04:10:07 +0000 (0:00:01.377) 0:00:02.984 ******** 2026-04-09 04:10:14.421570 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:10:14.421580 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:10:14.421591 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:14.421602 | orchestrator | ok: [testbed-manager] 2026-04-09 04:10:14.421613 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:10:14.421624 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:10:14.421634 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:10:14.421645 | orchestrator | 2026-04-09 04:10:14.421656 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 04:10:14.421670 | orchestrator | 2026-04-09 04:10:14.421682 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 04:10:14.421696 | orchestrator | Thursday 09 April 2026 04:10:13 +0000 (0:00:05.614) 
0:00:08.599 ******** 2026-04-09 04:10:14.421709 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:10:14.421721 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:14.421734 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:10:14.421746 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:10:14.421759 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:10:14.421771 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:10:14.421783 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:10:14.421795 | orchestrator | 2026-04-09 04:10:14.421808 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:10:14.421822 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:10:14.421836 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:10:14.421849 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:10:14.421861 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:10:14.421919 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:10:14.421933 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:10:14.421946 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:10:14.421958 | orchestrator | 2026-04-09 04:10:14.421970 | orchestrator | 2026-04-09 04:10:14.421983 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:10:14.421996 | orchestrator | Thursday 09 April 2026 04:10:13 +0000 (0:00:00.750) 0:00:09.349 ******** 2026-04-09 04:10:14.422080 | orchestrator | 
=============================================================================== 2026-04-09 04:10:14.422096 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.61s 2026-04-09 04:10:14.422106 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s 2026-04-09 04:10:14.422117 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-04-09 04:10:14.422128 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.75s 2026-04-09 04:10:14.795694 | orchestrator | + osism validate ceph-mons 2026-04-09 04:10:47.959664 | orchestrator | 2026-04-09 04:10:47.959763 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-09 04:10:47.959776 | orchestrator | 2026-04-09 04:10:47.959829 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-09 04:10:47.959857 | orchestrator | Thursday 09 April 2026 04:10:31 +0000 (0:00:00.451) 0:00:00.451 ******** 2026-04-09 04:10:47.959881 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:10:47.959891 | orchestrator | 2026-04-09 04:10:47.959899 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-09 04:10:47.959909 | orchestrator | Thursday 09 April 2026 04:10:32 +0000 (0:00:00.905) 0:00:01.357 ******** 2026-04-09 04:10:47.959918 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:10:47.959927 | orchestrator | 2026-04-09 04:10:47.959936 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-09 04:10:47.959945 | orchestrator | Thursday 09 April 2026 04:10:33 +0000 (0:00:01.039) 0:00:02.397 ******** 2026-04-09 04:10:47.959954 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.959964 | orchestrator | 2026-04-09 
04:10:47.959973 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-09 04:10:47.959981 | orchestrator | Thursday 09 April 2026 04:10:33 +0000 (0:00:00.132) 0:00:02.530 ******** 2026-04-09 04:10:47.959990 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.959999 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:10:47.960008 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:10:47.960016 | orchestrator | 2026-04-09 04:10:47.960025 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-09 04:10:47.960034 | orchestrator | Thursday 09 April 2026 04:10:34 +0000 (0:00:00.313) 0:00:02.843 ******** 2026-04-09 04:10:47.960043 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.960052 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:10:47.960060 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:10:47.960069 | orchestrator | 2026-04-09 04:10:47.960078 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-09 04:10:47.960086 | orchestrator | Thursday 09 April 2026 04:10:35 +0000 (0:00:01.021) 0:00:03.864 ******** 2026-04-09 04:10:47.960095 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.960104 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:10:47.960113 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:10:47.960122 | orchestrator | 2026-04-09 04:10:47.960130 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-09 04:10:47.960139 | orchestrator | Thursday 09 April 2026 04:10:35 +0000 (0:00:00.307) 0:00:04.172 ******** 2026-04-09 04:10:47.960148 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.960157 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:10:47.960165 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:10:47.960174 | orchestrator | 2026-04-09 04:10:47.960183 | orchestrator | TASK [Prepare test data] 
******************************************************* 2026-04-09 04:10:47.960191 | orchestrator | Thursday 09 April 2026 04:10:36 +0000 (0:00:00.535) 0:00:04.708 ******** 2026-04-09 04:10:47.960200 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.960209 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:10:47.960217 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:10:47.960226 | orchestrator | 2026-04-09 04:10:47.960234 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-09 04:10:47.960263 | orchestrator | Thursday 09 April 2026 04:10:36 +0000 (0:00:00.324) 0:00:05.033 ******** 2026-04-09 04:10:47.960272 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.960281 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:10:47.960290 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:10:47.960298 | orchestrator | 2026-04-09 04:10:47.960307 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-09 04:10:47.960316 | orchestrator | Thursday 09 April 2026 04:10:36 +0000 (0:00:00.310) 0:00:05.344 ******** 2026-04-09 04:10:47.960324 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.960333 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:10:47.960341 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:10:47.960350 | orchestrator | 2026-04-09 04:10:47.960359 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 04:10:47.960372 | orchestrator | Thursday 09 April 2026 04:10:37 +0000 (0:00:00.540) 0:00:05.884 ******** 2026-04-09 04:10:47.960381 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.960389 | orchestrator | 2026-04-09 04:10:47.960398 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 04:10:47.960407 | orchestrator | Thursday 09 April 2026 04:10:37 +0000 (0:00:00.276) 0:00:06.160 ******** 
2026-04-09 04:10:47.960415 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.960424 | orchestrator | 2026-04-09 04:10:47.960432 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 04:10:47.960441 | orchestrator | Thursday 09 April 2026 04:10:37 +0000 (0:00:00.255) 0:00:06.416 ******** 2026-04-09 04:10:47.960449 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.960458 | orchestrator | 2026-04-09 04:10:47.960466 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:10:47.960475 | orchestrator | Thursday 09 April 2026 04:10:38 +0000 (0:00:00.262) 0:00:06.678 ******** 2026-04-09 04:10:47.960484 | orchestrator | 2026-04-09 04:10:47.960492 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:10:47.960501 | orchestrator | Thursday 09 April 2026 04:10:38 +0000 (0:00:00.070) 0:00:06.749 ******** 2026-04-09 04:10:47.960510 | orchestrator | 2026-04-09 04:10:47.960518 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:10:47.960527 | orchestrator | Thursday 09 April 2026 04:10:38 +0000 (0:00:00.074) 0:00:06.824 ******** 2026-04-09 04:10:47.960535 | orchestrator | 2026-04-09 04:10:47.960544 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 04:10:47.960553 | orchestrator | Thursday 09 April 2026 04:10:38 +0000 (0:00:00.090) 0:00:06.915 ******** 2026-04-09 04:10:47.960561 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.960570 | orchestrator | 2026-04-09 04:10:47.960578 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-09 04:10:47.960587 | orchestrator | Thursday 09 April 2026 04:10:38 +0000 (0:00:00.259) 0:00:07.174 ******** 2026-04-09 04:10:47.960596 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 04:10:47.960604 | orchestrator | 2026-04-09 04:10:47.960627 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-09 04:10:47.960636 | orchestrator | Thursday 09 April 2026 04:10:38 +0000 (0:00:00.253) 0:00:07.428 ******** 2026-04-09 04:10:47.960645 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.960654 | orchestrator | 2026-04-09 04:10:47.960663 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-09 04:10:47.960672 | orchestrator | Thursday 09 April 2026 04:10:38 +0000 (0:00:00.126) 0:00:07.554 ******** 2026-04-09 04:10:47.960680 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:10:47.960689 | orchestrator | 2026-04-09 04:10:47.960702 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-09 04:10:47.960711 | orchestrator | Thursday 09 April 2026 04:10:40 +0000 (0:00:01.577) 0:00:09.132 ******** 2026-04-09 04:10:47.960719 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.960728 | orchestrator | 2026-04-09 04:10:47.960743 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-09 04:10:47.960752 | orchestrator | Thursday 09 April 2026 04:10:41 +0000 (0:00:00.532) 0:00:09.665 ******** 2026-04-09 04:10:47.960761 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.960770 | orchestrator | 2026-04-09 04:10:47.960778 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-09 04:10:47.960805 | orchestrator | Thursday 09 April 2026 04:10:41 +0000 (0:00:00.172) 0:00:09.837 ******** 2026-04-09 04:10:47.960814 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.960823 | orchestrator | 2026-04-09 04:10:47.960832 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-09 04:10:47.960841 | orchestrator | Thursday 09 April 
2026 04:10:41 +0000 (0:00:00.336) 0:00:10.174 ******** 2026-04-09 04:10:47.960849 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.960858 | orchestrator | 2026-04-09 04:10:47.960867 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-09 04:10:47.960876 | orchestrator | Thursday 09 April 2026 04:10:41 +0000 (0:00:00.324) 0:00:10.498 ******** 2026-04-09 04:10:47.960884 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.960893 | orchestrator | 2026-04-09 04:10:47.960902 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-09 04:10:47.960910 | orchestrator | Thursday 09 April 2026 04:10:41 +0000 (0:00:00.125) 0:00:10.623 ******** 2026-04-09 04:10:47.960919 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.960928 | orchestrator | 2026-04-09 04:10:47.960937 | orchestrator | TASK [Prepare status test vars] ************************************************ 2026-04-09 04:10:47.960945 | orchestrator | Thursday 09 April 2026 04:10:42 +0000 (0:00:00.149) 0:00:10.772 ******** 2026-04-09 04:10:47.960993 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.961004 | orchestrator | 2026-04-09 04:10:47.961013 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-09 04:10:47.961022 | orchestrator | Thursday 09 April 2026 04:10:42 +0000 (0:00:00.122) 0:00:10.894 ******** 2026-04-09 04:10:47.961030 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:10:47.961039 | orchestrator | 2026-04-09 04:10:47.961048 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-09 04:10:47.961056 | orchestrator | Thursday 09 April 2026 04:10:43 +0000 (0:00:01.320) 0:00:12.215 ******** 2026-04-09 04:10:47.961065 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.961073 | orchestrator | 2026-04-09 04:10:47.961082 | orchestrator | TASK [Fail cluster-health 
if health is not acceptable] ************************* 2026-04-09 04:10:47.961091 | orchestrator | Thursday 09 April 2026 04:10:43 +0000 (0:00:00.375) 0:00:12.590 ******** 2026-04-09 04:10:47.961099 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.961108 | orchestrator | 2026-04-09 04:10:47.961117 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-09 04:10:47.961125 | orchestrator | Thursday 09 April 2026 04:10:44 +0000 (0:00:00.165) 0:00:12.756 ******** 2026-04-09 04:10:47.961134 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:10:47.961142 | orchestrator | 2026-04-09 04:10:47.961151 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-09 04:10:47.961164 | orchestrator | Thursday 09 April 2026 04:10:44 +0000 (0:00:00.160) 0:00:12.917 ******** 2026-04-09 04:10:47.961174 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.961182 | orchestrator | 2026-04-09 04:10:47.961191 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-09 04:10:47.961200 | orchestrator | Thursday 09 April 2026 04:10:44 +0000 (0:00:00.158) 0:00:13.076 ******** 2026-04-09 04:10:47.961209 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.961217 | orchestrator | 2026-04-09 04:10:47.961226 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-09 04:10:47.961235 | orchestrator | Thursday 09 April 2026 04:10:44 +0000 (0:00:00.334) 0:00:13.410 ******** 2026-04-09 04:10:47.961243 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:10:47.961252 | orchestrator | 2026-04-09 04:10:47.961267 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-09 04:10:47.961276 | orchestrator | Thursday 09 April 2026 04:10:45 +0000 (0:00:00.296) 0:00:13.707 ******** 2026-04-09 
04:10:47.961284 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:10:47.961293 | orchestrator | 2026-04-09 04:10:47.961301 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 04:10:47.961310 | orchestrator | Thursday 09 April 2026 04:10:45 +0000 (0:00:00.279) 0:00:13.986 ******** 2026-04-09 04:10:47.961319 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:10:47.961327 | orchestrator | 2026-04-09 04:10:47.961336 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 04:10:47.961345 | orchestrator | Thursday 09 April 2026 04:10:47 +0000 (0:00:01.817) 0:00:15.804 ******** 2026-04-09 04:10:47.961353 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:10:47.961362 | orchestrator | 2026-04-09 04:10:47.961370 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 04:10:47.961379 | orchestrator | Thursday 09 April 2026 04:10:47 +0000 (0:00:00.293) 0:00:16.097 ******** 2026-04-09 04:10:47.961388 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:10:47.961397 | orchestrator | 2026-04-09 04:10:47.961411 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:10:50.844126 | orchestrator | Thursday 09 April 2026 04:10:47 +0000 (0:00:00.259) 0:00:16.357 ******** 2026-04-09 04:10:50.844252 | orchestrator | 2026-04-09 04:10:50.844276 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:10:50.844294 | orchestrator | Thursday 09 April 2026 04:10:47 +0000 (0:00:00.073) 0:00:16.430 ******** 2026-04-09 04:10:50.844312 | orchestrator | 2026-04-09 04:10:50.844329 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:10:50.844347 | orchestrator 
| Thursday 09 April 2026 04:10:47 +0000 (0:00:00.071) 0:00:16.502 ******** 2026-04-09 04:10:50.844364 | orchestrator | 2026-04-09 04:10:50.844380 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-09 04:10:50.844393 | orchestrator | Thursday 09 April 2026 04:10:47 +0000 (0:00:00.074) 0:00:16.576 ******** 2026-04-09 04:10:50.844404 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:10:50.844414 | orchestrator | 2026-04-09 04:10:50.844424 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 04:10:50.844434 | orchestrator | Thursday 09 April 2026 04:10:49 +0000 (0:00:01.631) 0:00:18.208 ******** 2026-04-09 04:10:50.844443 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-09 04:10:50.844453 | orchestrator |  "msg": [ 2026-04-09 04:10:50.844465 | orchestrator |  "Validator run completed.", 2026-04-09 04:10:50.844475 | orchestrator |  "You can find the report file here:", 2026-04-09 04:10:50.844485 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-09T04:10:32+00:00-report.json", 2026-04-09 04:10:50.844495 | orchestrator |  "on the following host:", 2026-04-09 04:10:50.844505 | orchestrator |  "testbed-manager" 2026-04-09 04:10:50.844515 | orchestrator |  ] 2026-04-09 04:10:50.844525 | orchestrator | } 2026-04-09 04:10:50.844535 | orchestrator | 2026-04-09 04:10:50.844545 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:10:50.844556 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 04:10:50.844567 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:10:50.844577 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 
04:10:50.844587 | orchestrator | 2026-04-09 04:10:50.844626 | orchestrator | 2026-04-09 04:10:50.844636 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:10:50.844646 | orchestrator | Thursday 09 April 2026 04:10:50 +0000 (0:00:00.891) 0:00:19.099 ******** 2026-04-09 04:10:50.844655 | orchestrator | =============================================================================== 2026-04-09 04:10:50.844665 | orchestrator | Aggregate test results step one ----------------------------------------- 1.82s 2026-04-09 04:10:50.844674 | orchestrator | Write report file ------------------------------------------------------- 1.63s 2026-04-09 04:10:50.844684 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.58s 2026-04-09 04:10:50.844693 | orchestrator | Gather status data ------------------------------------------------------ 1.32s 2026-04-09 04:10:50.844703 | orchestrator | Create report output directory ------------------------------------------ 1.04s 2026-04-09 04:10:50.844712 | orchestrator | Get container info ------------------------------------------------------ 1.02s 2026-04-09 04:10:50.844722 | orchestrator | Get timestamp for report file ------------------------------------------- 0.91s 2026-04-09 04:10:50.844732 | orchestrator | Print report file information ------------------------------------------- 0.89s 2026-04-09 04:10:50.844742 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.54s 2026-04-09 04:10:50.844751 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2026-04-09 04:10:50.844761 | orchestrator | Set quorum test data ---------------------------------------------------- 0.53s 2026-04-09 04:10:50.844770 | orchestrator | Set health test data ---------------------------------------------------- 0.38s 2026-04-09 04:10:50.844815 | orchestrator | Pass quorum test if all 
monitors are in quorum -------------------------- 0.34s 2026-04-09 04:10:50.844826 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s 2026-04-09 04:10:50.844836 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2026-04-09 04:10:50.844845 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2026-04-09 04:10:50.844855 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-04-09 04:10:50.844864 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s 2026-04-09 04:10:50.844874 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2026-04-09 04:10:50.844884 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s 2026-04-09 04:10:51.195379 | orchestrator | + osism validate ceph-mgrs 2026-04-09 04:11:23.283436 | orchestrator | 2026-04-09 04:11:23.283541 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-09 04:11:23.283555 | orchestrator | 2026-04-09 04:11:23.283566 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-09 04:11:23.283576 | orchestrator | Thursday 09 April 2026 04:11:08 +0000 (0:00:00.665) 0:00:00.665 ******** 2026-04-09 04:11:23.283586 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:11:23.283595 | orchestrator | 2026-04-09 04:11:23.283604 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-09 04:11:23.283612 | orchestrator | Thursday 09 April 2026 04:11:09 +0000 (0:00:00.897) 0:00:01.563 ******** 2026-04-09 04:11:23.283639 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:11:23.283648 | orchestrator | 2026-04-09 
04:11:23.283657 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-09 04:11:23.283666 | orchestrator | Thursday 09 April 2026 04:11:10 +0000 (0:00:01.017) 0:00:02.580 ******** 2026-04-09 04:11:23.283675 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.283685 | orchestrator | 2026-04-09 04:11:23.283694 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-09 04:11:23.283756 | orchestrator | Thursday 09 April 2026 04:11:10 +0000 (0:00:00.140) 0:00:02.721 ******** 2026-04-09 04:11:23.283766 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.283774 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:11:23.283802 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:11:23.283811 | orchestrator | 2026-04-09 04:11:23.283819 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-09 04:11:23.283828 | orchestrator | Thursday 09 April 2026 04:11:10 +0000 (0:00:00.289) 0:00:03.011 ******** 2026-04-09 04:11:23.283837 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:11:23.283846 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:11:23.283854 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.283863 | orchestrator | 2026-04-09 04:11:23.283871 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-09 04:11:23.283880 | orchestrator | Thursday 09 April 2026 04:11:11 +0000 (0:00:01.028) 0:00:04.039 ******** 2026-04-09 04:11:23.283889 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:11:23.283898 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:11:23.283906 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:11:23.283915 | orchestrator | 2026-04-09 04:11:23.283923 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-09 04:11:23.283932 | orchestrator | Thursday 09 April 2026 
04:11:11 +0000 (0:00:00.333) 0:00:04.373 ******** 2026-04-09 04:11:23.283940 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.283949 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:11:23.283958 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:11:23.283966 | orchestrator | 2026-04-09 04:11:23.283975 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 04:11:23.283984 | orchestrator | Thursday 09 April 2026 04:11:12 +0000 (0:00:00.555) 0:00:04.928 ******** 2026-04-09 04:11:23.283994 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.284004 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:11:23.284014 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:11:23.284023 | orchestrator | 2026-04-09 04:11:23.284033 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2026-04-09 04:11:23.284043 | orchestrator | Thursday 09 April 2026 04:11:12 +0000 (0:00:00.318) 0:00:05.247 ******** 2026-04-09 04:11:23.284054 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:11:23.284064 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:11:23.284074 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:11:23.284085 | orchestrator | 2026-04-09 04:11:23.284095 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-09 04:11:23.284104 | orchestrator | Thursday 09 April 2026 04:11:13 +0000 (0:00:00.305) 0:00:05.553 ******** 2026-04-09 04:11:23.284112 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.284121 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:11:23.284129 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:11:23.284138 | orchestrator | 2026-04-09 04:11:23.284146 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 04:11:23.284155 | orchestrator | Thursday 09 April 2026 04:11:13 +0000 (0:00:00.533) 0:00:06.086 ******** 
2026-04-09 04:11:23.284163 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:11:23.284172 | orchestrator | 2026-04-09 04:11:23.284180 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 04:11:23.284189 | orchestrator | Thursday 09 April 2026 04:11:13 +0000 (0:00:00.256) 0:00:06.342 ******** 2026-04-09 04:11:23.284197 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:11:23.284206 | orchestrator | 2026-04-09 04:11:23.284214 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 04:11:23.284227 | orchestrator | Thursday 09 April 2026 04:11:14 +0000 (0:00:00.287) 0:00:06.630 ******** 2026-04-09 04:11:23.284236 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:11:23.284244 | orchestrator | 2026-04-09 04:11:23.284253 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:11:23.284261 | orchestrator | Thursday 09 April 2026 04:11:14 +0000 (0:00:00.263) 0:00:06.894 ******** 2026-04-09 04:11:23.284270 | orchestrator | 2026-04-09 04:11:23.284281 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:11:23.284295 | orchestrator | Thursday 09 April 2026 04:11:14 +0000 (0:00:00.074) 0:00:06.969 ******** 2026-04-09 04:11:23.284318 | orchestrator | 2026-04-09 04:11:23.284337 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:11:23.284357 | orchestrator | Thursday 09 April 2026 04:11:14 +0000 (0:00:00.077) 0:00:07.046 ******** 2026-04-09 04:11:23.284371 | orchestrator | 2026-04-09 04:11:23.284385 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 04:11:23.284399 | orchestrator | Thursday 09 April 2026 04:11:14 +0000 (0:00:00.083) 0:00:07.130 ******** 2026-04-09 04:11:23.284412 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 04:11:23.284424 | orchestrator | 2026-04-09 04:11:23.284439 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-09 04:11:23.284453 | orchestrator | Thursday 09 April 2026 04:11:15 +0000 (0:00:00.272) 0:00:07.402 ******** 2026-04-09 04:11:23.284467 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:11:23.284481 | orchestrator | 2026-04-09 04:11:23.284517 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-09 04:11:23.284532 | orchestrator | Thursday 09 April 2026 04:11:15 +0000 (0:00:00.254) 0:00:07.657 ******** 2026-04-09 04:11:23.284541 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.284550 | orchestrator | 2026-04-09 04:11:23.284558 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-04-09 04:11:23.284567 | orchestrator | Thursday 09 April 2026 04:11:15 +0000 (0:00:00.136) 0:00:07.794 ******** 2026-04-09 04:11:23.284576 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:11:23.284584 | orchestrator | 2026-04-09 04:11:23.284593 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-09 04:11:23.284602 | orchestrator | Thursday 09 April 2026 04:11:17 +0000 (0:00:02.019) 0:00:09.813 ******** 2026-04-09 04:11:23.284610 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.284619 | orchestrator | 2026-04-09 04:11:23.284628 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-09 04:11:23.284636 | orchestrator | Thursday 09 April 2026 04:11:17 +0000 (0:00:00.457) 0:00:10.271 ******** 2026-04-09 04:11:23.284645 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.284653 | orchestrator | 2026-04-09 04:11:23.284662 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-09 04:11:23.284671 | orchestrator | Thursday 09 April 
2026 04:11:18 +0000 (0:00:00.348) 0:00:10.619 ******** 2026-04-09 04:11:23.284679 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:11:23.284688 | orchestrator | 2026-04-09 04:11:23.284696 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-09 04:11:23.284731 | orchestrator | Thursday 09 April 2026 04:11:18 +0000 (0:00:00.148) 0:00:10.767 ******** 2026-04-09 04:11:23.284740 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:11:23.284749 | orchestrator | 2026-04-09 04:11:23.284758 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-09 04:11:23.284766 | orchestrator | Thursday 09 April 2026 04:11:18 +0000 (0:00:00.157) 0:00:10.925 ******** 2026-04-09 04:11:23.284775 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:11:23.284785 | orchestrator | 2026-04-09 04:11:23.284796 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-09 04:11:23.284816 | orchestrator | Thursday 09 April 2026 04:11:18 +0000 (0:00:00.297) 0:00:11.223 ******** 2026-04-09 04:11:23.284834 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:11:23.284852 | orchestrator | 2026-04-09 04:11:23.284869 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 04:11:23.284886 | orchestrator | Thursday 09 April 2026 04:11:19 +0000 (0:00:00.278) 0:00:11.501 ******** 2026-04-09 04:11:23.284901 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:11:23.284919 | orchestrator | 2026-04-09 04:11:23.284937 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 04:11:23.284956 | orchestrator | Thursday 09 April 2026 04:11:20 +0000 (0:00:01.303) 0:00:12.805 ******** 2026-04-09 04:11:23.284967 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 
04:11:23.284989 | orchestrator | 2026-04-09 04:11:23.285000 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 04:11:23.285011 | orchestrator | Thursday 09 April 2026 04:11:20 +0000 (0:00:00.287) 0:00:13.093 ******** 2026-04-09 04:11:23.285021 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:11:23.285032 | orchestrator | 2026-04-09 04:11:23.285043 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:11:23.285053 | orchestrator | Thursday 09 April 2026 04:11:21 +0000 (0:00:00.302) 0:00:13.395 ******** 2026-04-09 04:11:23.285064 | orchestrator | 2026-04-09 04:11:23.285075 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:11:23.285085 | orchestrator | Thursday 09 April 2026 04:11:21 +0000 (0:00:00.072) 0:00:13.467 ******** 2026-04-09 04:11:23.285096 | orchestrator | 2026-04-09 04:11:23.285107 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:11:23.285117 | orchestrator | Thursday 09 April 2026 04:11:21 +0000 (0:00:00.071) 0:00:13.539 ******** 2026-04-09 04:11:23.285128 | orchestrator | 2026-04-09 04:11:23.285139 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-09 04:11:23.285149 | orchestrator | Thursday 09 April 2026 04:11:21 +0000 (0:00:00.290) 0:00:13.830 ******** 2026-04-09 04:11:23.285160 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 04:11:23.285170 | orchestrator | 2026-04-09 04:11:23.285188 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 04:11:23.285199 | orchestrator | Thursday 09 April 2026 04:11:22 +0000 (0:00:01.355) 0:00:15.185 ******** 2026-04-09 04:11:23.285210 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => { 2026-04-09 04:11:23.285221 | orchestrator |  "msg": [ 2026-04-09 04:11:23.285232 | orchestrator |  "Validator run completed.", 2026-04-09 04:11:23.285243 | orchestrator |  "You can find the report file here:", 2026-04-09 04:11:23.285254 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-09T04:11:09+00:00-report.json", 2026-04-09 04:11:23.285266 | orchestrator |  "on the following host:", 2026-04-09 04:11:23.285276 | orchestrator |  "testbed-manager" 2026-04-09 04:11:23.285287 | orchestrator |  ] 2026-04-09 04:11:23.285298 | orchestrator | } 2026-04-09 04:11:23.285309 | orchestrator | 2026-04-09 04:11:23.285320 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:11:23.285332 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 04:11:23.285345 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:11:23.285365 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:11:23.654764 | orchestrator | 2026-04-09 04:11:23.654864 | orchestrator | 2026-04-09 04:11:23.654887 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:11:23.654927 | orchestrator | Thursday 09 April 2026 04:11:23 +0000 (0:00:00.461) 0:00:15.647 ******** 2026-04-09 04:11:23.654955 | orchestrator | =============================================================================== 2026-04-09 04:11:23.654972 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.02s 2026-04-09 04:11:23.654990 | orchestrator | Write report file ------------------------------------------------------- 1.36s 2026-04-09 04:11:23.655007 | orchestrator | Aggregate test results step one ----------------------------------------- 1.30s 
2026-04-09 04:11:23.655025 | orchestrator | Get container info ------------------------------------------------------ 1.03s 2026-04-09 04:11:23.655044 | orchestrator | Create report output directory ------------------------------------------ 1.02s 2026-04-09 04:11:23.655059 | orchestrator | Get timestamp for report file ------------------------------------------- 0.90s 2026-04-09 04:11:23.655111 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s 2026-04-09 04:11:23.655130 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.53s 2026-04-09 04:11:23.655146 | orchestrator | Print report file information ------------------------------------------- 0.46s 2026-04-09 04:11:23.655164 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.46s 2026-04-09 04:11:23.655183 | orchestrator | Flush handlers ---------------------------------------------------------- 0.43s 2026-04-09 04:11:23.655201 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.35s 2026-04-09 04:11:23.655219 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2026-04-09 04:11:23.655237 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2026-04-09 04:11:23.655257 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2026-04-09 04:11:23.655276 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s 2026-04-09 04:11:23.655296 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s 2026-04-09 04:11:23.655310 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2026-04-09 04:11:23.655323 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-04-09 
04:11:23.655336 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-04-09 04:11:23.984755 | orchestrator | + osism validate ceph-osds 2026-04-09 04:11:45.616536 | orchestrator | 2026-04-09 04:11:45.616647 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-09 04:11:45.616724 | orchestrator | 2026-04-09 04:11:45.616737 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-09 04:11:45.616747 | orchestrator | Thursday 09 April 2026 04:11:40 +0000 (0:00:00.438) 0:00:00.438 ******** 2026-04-09 04:11:45.616757 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 04:11:45.616767 | orchestrator | 2026-04-09 04:11:45.616776 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 04:11:45.616785 | orchestrator | Thursday 09 April 2026 04:11:41 +0000 (0:00:00.902) 0:00:01.341 ******** 2026-04-09 04:11:45.616794 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 04:11:45.616803 | orchestrator | 2026-04-09 04:11:45.616813 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-09 04:11:45.616822 | orchestrator | Thursday 09 April 2026 04:11:42 +0000 (0:00:00.604) 0:00:01.946 ******** 2026-04-09 04:11:45.616831 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 04:11:45.616841 | orchestrator | 2026-04-09 04:11:45.616850 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-09 04:11:45.616859 | orchestrator | Thursday 09 April 2026 04:11:43 +0000 (0:00:00.744) 0:00:02.690 ******** 2026-04-09 04:11:45.616868 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:45.616878 | orchestrator | 2026-04-09 04:11:45.616888 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-04-09 04:11:45.616897 | orchestrator | Thursday 09 April 2026 04:11:43 +0000 (0:00:00.155) 0:00:02.845 ******** 2026-04-09 04:11:45.616906 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:45.616915 | orchestrator | 2026-04-09 04:11:45.616925 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-09 04:11:45.616934 | orchestrator | Thursday 09 April 2026 04:11:43 +0000 (0:00:00.138) 0:00:02.984 ******** 2026-04-09 04:11:45.616943 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:45.616952 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:11:45.616961 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:11:45.616970 | orchestrator | 2026-04-09 04:11:45.616979 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-09 04:11:45.616988 | orchestrator | Thursday 09 April 2026 04:11:43 +0000 (0:00:00.333) 0:00:03.318 ******** 2026-04-09 04:11:45.617021 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:45.617030 | orchestrator | 2026-04-09 04:11:45.617039 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-09 04:11:45.617048 | orchestrator | Thursday 09 April 2026 04:11:43 +0000 (0:00:00.165) 0:00:03.484 ******** 2026-04-09 04:11:45.617057 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:45.617066 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:11:45.617075 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:11:45.617084 | orchestrator | 2026-04-09 04:11:45.617094 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-09 04:11:45.617104 | orchestrator | Thursday 09 April 2026 04:11:44 +0000 (0:00:00.378) 0:00:03.862 ******** 2026-04-09 04:11:45.617113 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:45.617122 | orchestrator | 2026-04-09 04:11:45.617131 | 
orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 04:11:45.617140 | orchestrator | Thursday 09 April 2026 04:11:45 +0000 (0:00:00.833) 0:00:04.695 ******** 2026-04-09 04:11:45.617149 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:45.617159 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:11:45.617168 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:11:45.617178 | orchestrator | 2026-04-09 04:11:45.617187 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-09 04:11:45.617196 | orchestrator | Thursday 09 April 2026 04:11:45 +0000 (0:00:00.341) 0:00:05.037 ******** 2026-04-09 04:11:45.617207 | orchestrator | skipping: [testbed-node-3] => (item={'id': '762577ea2b41b28b49c1a01610815917b53ed0448f4b1970a907109cf7ae028c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-09 04:11:45.617219 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1eaa32c560c08fa69340d97329b5e19e64c8b2816f0d07155f5cb89148328fe0', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-09 04:11:45.617230 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c84d329aa67fe8d0e773250414b517dbab6ea0663a1203f476a758bc4bfaff42', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-04-09 04:11:45.617239 | orchestrator | skipping: [testbed-node-3] => (item={'id': '81d031fa31f4880e8833f0fed4b7e4dcd6aa038608b40adb7938b0bc9c5decdc', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  
2026-04-09 04:11:45.617248 | orchestrator | skipping: [testbed-node-3] => (item={'id': '15c565b9b9b30dd151caa9419cfb591f25885f50af54f39609b0243b33a12b82', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-09 04:11:45.617325 | orchestrator | skipping: [testbed-node-3] => (item={'id': '460f36b81899669b19326c1f398bb4434ea2f2b6579f5f985ec62a4e693f79b9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-09 04:11:45.617340 | orchestrator | skipping: [testbed-node-3] => (item={'id': '019a6cd22fc2b4aa86306b1ed66f170875976c9b9a3e2879b85ab1864ba214b2', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-09 04:11:45.617351 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69b976078be8bb6202e185c0e366e1e3163676f88e939aaac3352684fe690f66', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-09 04:11:45.617371 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8995ca1aaac5e114603179a4afb3f644ea862cd7555a38378a7eb26cfd74edb3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.617386 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd1a51519bc06147db1230866003d899ecbc2f09df334b52e02198e7cb87307fe', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.617396 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'970b51607bf9b6cb6eb324040b6650e56161337e34a2b16f9511dd95d62a3d61', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.617407 | orchestrator | ok: [testbed-node-3] => (item={'id': '859218b2589d98cf01c625021bfd6b65c0d8bf772c369b4d4d42edd710996c6a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-09 04:11:45.617417 | orchestrator | ok: [testbed-node-3] => (item={'id': '482eb42d17d144fcdd78324b6c385f22ada1d4a7ccc5e7604a01bc99548bd5ad', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-09 04:11:45.617426 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd7f21dee1ed723fd6c6052ab846b239e80d2798fc29a9a4fc8acc273752af5ef', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.617436 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1b6928d422b7752f11517805a1747455fed25de956d0368feaf883d992adc271', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-09 04:11:45.617445 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40fc1756b431f30ae245a8f272468ce55b9a189bbbcedddd49e4da7c46937f3a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-09 04:11:45.617453 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3b1ef3de6f71b162f844dbd535e5ffcde4029dd0934568d98b3b1156b55ef2ac', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 
'running', 'status': 'Up 2 hours'})  2026-04-09 04:11:45.617463 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5fd817ef7bb20de2f5deed7b9f9e1f720c4fd1e6278f9775a649afc4a6e942be', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 04:11:45.617472 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca73115c65f1352e51b954b3cffc131d6f2a233a9989b9ed77bdf50c65977781', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 04:11:45.617481 | orchestrator | skipping: [testbed-node-4] => (item={'id': '553480a58fa229c04d30787d1a5c04021454364a283a0b745b15a72cdd28795d', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-09 04:11:45.617501 | orchestrator | skipping: [testbed-node-4] => (item={'id': '864b4bedbbcfc4e790dad1e38265e953e26a13daa9585d3604103dff524acd2e', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-09 04:11:45.787924 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8272c9d3d5196ee99a75bb8210e396d89a1f067f26742cfe95151d181a1c28a9', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-04-09 04:11:45.788049 | orchestrator | skipping: [testbed-node-4] => (item={'id': '89e64135fea443c97661681c40b41b9efe87674de7d91adc3a703aa579ae975d', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-04-09 04:11:45.788064 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': 'fdb26f00945249ba3b242554f5d3c1979653c2b31a0adac06d3df31a4a90cb34', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-09 04:11:45.788093 | orchestrator | skipping: [testbed-node-4] => (item={'id': '67be077e3e6fcf58d3c3e2572402e8ec6666dfd11ec6f7ed721d2c405ac27e27', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-09 04:11:45.788105 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3d75a3b819a45ee68330ae097944247a618b2877a8d6de73744a6e592c3618a8', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-09 04:11:45.788116 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a2060950f35b0f9e145508a02abf1b5a87594b128707f5feec005d9d464c9c60', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-09 04:11:45.788139 | orchestrator | skipping: [testbed-node-4] => (item={'id': '182a4ef697da3573e9407fa61892615d86412a63211610e6075b0b654ae97b37', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.788152 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b937d6617a26e128e3b84abc0c3f4d45466245ccdac8104b8725cf638e4bc75a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.788164 | orchestrator | skipping: [testbed-node-4] => (item={'id': '73d78aeee2fb0f2f43d5984362f0bdeeafbe793b8562ca36c76c327f1f0302af', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.788178 | orchestrator | ok: [testbed-node-4] => (item={'id': '84fc6887b4391881e49063b7e6ac18a202e8f22eb920f4e5b93392459a4225e7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-09 04:11:45.788190 | orchestrator | ok: [testbed-node-4] => (item={'id': '5ebfb69ca15e88d363a955c475f3ee71c7b34ec1635fc4b8ef27f8c8dd9eec4b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-09 04:11:45.788201 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8d0e8002f9bdd63c9ea1ba2d6abd5ba731d0d5f3ab844f4fe9c9b19a77148829', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.788212 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c49f13e1e2579481099f09f9a89b21819f4cf3ca2dab2e5b43746fc67cdc0202', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-09 04:11:45.788224 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0242b9485ce026967965a3d0946867c0209f5a0558414fb29137a64901db73e7', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-09 04:11:45.788259 | orchestrator | skipping: [testbed-node-4] => (item={'id': '09eaad4a00a0c98189554b420699a867b4da830c1413bd44cf9a75d5be0223f9', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 04:11:45.788272 | orchestrator | 
skipping: [testbed-node-4] => (item={'id': 'e22801cbff24ad625e88d32ad49b2f94265f424ebe50299921038726f974d97a', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 04:11:45.788283 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'afbaabf2bf4bb80c34bf30a5dc83b8f67dd3f681c652816a95a660c721835fda', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 04:11:45.788294 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5812b04e7608c4dab5bf8933a6d6758580a2dbb328626d4e846858854b1f0c41', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-09 04:11:45.788310 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1ba13189e69b24739de9de1ff87197c183bcce033559b113e5526f940dc7e5cc', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-09 04:11:45.788322 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5f683eefc6151637805b73f63115200c1f10ed8ffdbcfa52432fb4412c1e9ca2', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-04-09 04:11:45.788333 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1be33e2e25034d4b4c56b99e9c96b172836db574ff813b3ab67105f76189a9e2', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-04-09 04:11:45.788344 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'd4884dbae9b28bbc4b706b7bb11c10a00c5dff9742a12e254a7639b935cc58da', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-09 04:11:45.788355 | orchestrator | skipping: [testbed-node-5] => (item={'id': '04c73f594de1d140f48b312a99e0c156cfe3980043fa75220279b99688d1d458', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-09 04:11:45.788366 | orchestrator | skipping: [testbed-node-5] => (item={'id': '01a76d7aa603d37dc3dd4b4e3dcca65761cf3623e89422d381e8eff4a46a25b3', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-09 04:11:45.788377 | orchestrator | skipping: [testbed-node-5] => (item={'id': '92c9688f2c4d9987d980da86350ae1225895240256c0c38b3984a39c48a935c1', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-09 04:11:45.788388 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4aa4286797044365256d60b234038aaf83027a22ebaee770e49f810d3ef41225', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.788400 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5edf71e4f5a5f73dcef50805e3d49ba9b743b21848643524ffde0510663ea3dc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.788411 | orchestrator | skipping: [testbed-node-5] => (item={'id': '03655cca92e7d6402ed48bcdd50d313a0ad0ea321fd0d651d8eafc15276adc63', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:45.788429 | orchestrator | ok: [testbed-node-5] => (item={'id': '115aed6c13c3beed036c27aa00c0f7b393dbaa2904cbe80c736cf5de5f6dae54', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-09 04:11:45.788448 | orchestrator | ok: [testbed-node-5] => (item={'id': 'da55b18a0a547cee1c5c431e3b600082443d0fbdac7c79bef53eea72405864d4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-09 04:11:57.686228 | orchestrator | skipping: [testbed-node-5] => (item={'id': '318d1c0c2aa3d6a613f75f0f039948623635e91b2540557d21026558221ae011', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 04:11:57.686372 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b8a728a28432dc29444e42b6b14fe65ee2e7f1da670bc6fa7959bc28905f560f', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-09 04:11:57.686398 | orchestrator | skipping: [testbed-node-5] => (item={'id': '01e6017d6c06c1211f2486b5dd2388a94d84ab3c0a4616eea425023c598a4f7a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-09 04:11:57.686419 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9b0789579365fa8098327a7bc3198d344342472f10b61083f27368c3fb75ae4b', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 04:11:57.686438 | orchestrator | 
skipping: [testbed-node-5] => (item={'id': '5ec1089ea3f59f1fdad2a764caa4c27cb4228ce97a3b5bb9c8b34e161dc9f61f', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 04:11:57.686456 | orchestrator | skipping: [testbed-node-5] => (item={'id': '851ce9862d8a2fce593923fb7515f0a087145dc5822b1061db7b3d7b70922455', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 04:11:57.686474 | orchestrator | 2026-04-09 04:11:57.686487 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-09 04:11:57.686498 | orchestrator | Thursday 09 April 2026 04:11:45 +0000 (0:00:00.520) 0:00:05.557 ******** 2026-04-09 04:11:57.686508 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.686519 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:11:57.686528 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:11:57.686538 | orchestrator | 2026-04-09 04:11:57.686548 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-09 04:11:57.686557 | orchestrator | Thursday 09 April 2026 04:11:46 +0000 (0:00:00.315) 0:00:05.873 ******** 2026-04-09 04:11:57.686567 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.686578 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:11:57.686587 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:11:57.686597 | orchestrator | 2026-04-09 04:11:57.686606 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-09 04:11:57.686616 | orchestrator | Thursday 09 April 2026 04:11:46 +0000 (0:00:00.489) 0:00:06.363 ******** 2026-04-09 04:11:57.686626 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.686696 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:11:57.686709 | orchestrator | ok: 
[testbed-node-5] 2026-04-09 04:11:57.686721 | orchestrator | 2026-04-09 04:11:57.686732 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 04:11:57.686768 | orchestrator | Thursday 09 April 2026 04:11:47 +0000 (0:00:00.356) 0:00:06.719 ******** 2026-04-09 04:11:57.686780 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.686791 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:11:57.686802 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:11:57.686813 | orchestrator | 2026-04-09 04:11:57.686825 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-09 04:11:57.686837 | orchestrator | Thursday 09 April 2026 04:11:47 +0000 (0:00:00.308) 0:00:07.028 ******** 2026-04-09 04:11:57.686866 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-09 04:11:57.686886 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-09 04:11:57.686906 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.686924 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-09 04:11:57.686939 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-09 04:11:57.686951 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:11:57.686962 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-09 04:11:57.686973 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-09 04:11:57.686985 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:11:57.686995 | orchestrator | 2026-04-09 04:11:57.687007 | orchestrator | TASK [Get count of ceph-osd containers that are not running] 
******************* 2026-04-09 04:11:57.687018 | orchestrator | Thursday 09 April 2026 04:11:47 +0000 (0:00:00.329) 0:00:07.357 ******** 2026-04-09 04:11:57.687029 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.687038 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:11:57.687048 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:11:57.687057 | orchestrator | 2026-04-09 04:11:57.687067 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-09 04:11:57.687077 | orchestrator | Thursday 09 April 2026 04:11:48 +0000 (0:00:00.512) 0:00:07.870 ******** 2026-04-09 04:11:57.687086 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.687114 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:11:57.687125 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:11:57.687134 | orchestrator | 2026-04-09 04:11:57.687144 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-09 04:11:57.687154 | orchestrator | Thursday 09 April 2026 04:11:48 +0000 (0:00:00.318) 0:00:08.188 ******** 2026-04-09 04:11:57.687163 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.687173 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:11:57.687182 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:11:57.687192 | orchestrator | 2026-04-09 04:11:57.687201 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-09 04:11:57.687211 | orchestrator | Thursday 09 April 2026 04:11:48 +0000 (0:00:00.325) 0:00:08.513 ******** 2026-04-09 04:11:57.687220 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.687229 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:11:57.687239 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:11:57.687248 | orchestrator | 2026-04-09 04:11:57.687258 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 
04:11:57.687267 | orchestrator | Thursday 09 April 2026 04:11:49 +0000 (0:00:00.328) 0:00:08.842 ******** 2026-04-09 04:11:57.687277 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.687286 | orchestrator | 2026-04-09 04:11:57.687296 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 04:11:57.687310 | orchestrator | Thursday 09 April 2026 04:11:49 +0000 (0:00:00.703) 0:00:09.545 ******** 2026-04-09 04:11:57.687320 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.687329 | orchestrator | 2026-04-09 04:11:57.687339 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 04:11:57.687356 | orchestrator | Thursday 09 April 2026 04:11:50 +0000 (0:00:00.264) 0:00:09.810 ******** 2026-04-09 04:11:57.687366 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.687375 | orchestrator | 2026-04-09 04:11:57.687385 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:11:57.687394 | orchestrator | Thursday 09 April 2026 04:11:50 +0000 (0:00:00.275) 0:00:10.086 ******** 2026-04-09 04:11:57.687404 | orchestrator | 2026-04-09 04:11:57.687413 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:11:57.687423 | orchestrator | Thursday 09 April 2026 04:11:50 +0000 (0:00:00.085) 0:00:10.171 ******** 2026-04-09 04:11:57.687432 | orchestrator | 2026-04-09 04:11:57.687442 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:11:57.687451 | orchestrator | Thursday 09 April 2026 04:11:50 +0000 (0:00:00.072) 0:00:10.244 ******** 2026-04-09 04:11:57.687461 | orchestrator | 2026-04-09 04:11:57.687470 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 04:11:57.687480 | orchestrator | Thursday 09 April 2026 04:11:50 +0000 
(0:00:00.073) 0:00:10.318 ******** 2026-04-09 04:11:57.687489 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.687498 | orchestrator | 2026-04-09 04:11:57.687508 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-09 04:11:57.687517 | orchestrator | Thursday 09 April 2026 04:11:50 +0000 (0:00:00.280) 0:00:10.599 ******** 2026-04-09 04:11:57.687527 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.687536 | orchestrator | 2026-04-09 04:11:57.687546 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 04:11:57.687555 | orchestrator | Thursday 09 April 2026 04:11:51 +0000 (0:00:00.259) 0:00:10.859 ******** 2026-04-09 04:11:57.687565 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.687574 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:11:57.687584 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:11:57.687593 | orchestrator | 2026-04-09 04:11:57.687603 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-09 04:11:57.687612 | orchestrator | Thursday 09 April 2026 04:11:51 +0000 (0:00:00.353) 0:00:11.212 ******** 2026-04-09 04:11:57.687621 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.687631 | orchestrator | 2026-04-09 04:11:57.687660 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-09 04:11:57.687670 | orchestrator | Thursday 09 April 2026 04:11:52 +0000 (0:00:00.735) 0:00:11.948 ******** 2026-04-09 04:11:57.687679 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 04:11:57.687689 | orchestrator | 2026-04-09 04:11:57.687698 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-09 04:11:57.687708 | orchestrator | Thursday 09 April 2026 04:11:53 +0000 (0:00:01.604) 0:00:13.552 ******** 2026-04-09 04:11:57.687717 | 
orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.687727 | orchestrator | 2026-04-09 04:11:57.687736 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-09 04:11:57.687746 | orchestrator | Thursday 09 April 2026 04:11:54 +0000 (0:00:00.138) 0:00:13.691 ******** 2026-04-09 04:11:57.687755 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.687764 | orchestrator | 2026-04-09 04:11:57.687774 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-09 04:11:57.687783 | orchestrator | Thursday 09 April 2026 04:11:54 +0000 (0:00:00.332) 0:00:14.024 ******** 2026-04-09 04:11:57.687793 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:11:57.687802 | orchestrator | 2026-04-09 04:11:57.687811 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-09 04:11:57.687821 | orchestrator | Thursday 09 April 2026 04:11:54 +0000 (0:00:00.134) 0:00:14.158 ******** 2026-04-09 04:11:57.687830 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.687839 | orchestrator | 2026-04-09 04:11:57.687849 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 04:11:57.687858 | orchestrator | Thursday 09 April 2026 04:11:54 +0000 (0:00:00.139) 0:00:14.298 ******** 2026-04-09 04:11:57.687875 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:11:57.687884 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:11:57.687894 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:11:57.687903 | orchestrator | 2026-04-09 04:11:57.687913 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-09 04:11:57.687922 | orchestrator | Thursday 09 April 2026 04:11:54 +0000 (0:00:00.356) 0:00:14.654 ******** 2026-04-09 04:11:57.687932 | orchestrator | changed: [testbed-node-3] 2026-04-09 04:11:57.687941 | orchestrator | changed: 
[testbed-node-4] 2026-04-09 04:11:57.687951 | orchestrator | changed: [testbed-node-5] 2026-04-09 04:12:08.432946 | orchestrator | 2026-04-09 04:12:08.433067 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-09 04:12:08.433087 | orchestrator | Thursday 09 April 2026 04:11:57 +0000 (0:00:02.701) 0:00:17.355 ******** 2026-04-09 04:12:08.433101 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:12:08.433113 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:12:08.433124 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:12:08.433136 | orchestrator | 2026-04-09 04:12:08.433148 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-09 04:12:08.433159 | orchestrator | Thursday 09 April 2026 04:11:58 +0000 (0:00:00.372) 0:00:17.728 ******** 2026-04-09 04:12:08.433171 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:12:08.433182 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:12:08.433193 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:12:08.433204 | orchestrator | 2026-04-09 04:12:08.433216 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-09 04:12:08.433227 | orchestrator | Thursday 09 April 2026 04:11:58 +0000 (0:00:00.492) 0:00:18.221 ******** 2026-04-09 04:12:08.433238 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:12:08.433251 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:12:08.433262 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:12:08.433273 | orchestrator | 2026-04-09 04:12:08.433284 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-09 04:12:08.433313 | orchestrator | Thursday 09 April 2026 04:11:58 +0000 (0:00:00.328) 0:00:18.549 ******** 2026-04-09 04:12:08.433324 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:12:08.433336 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:12:08.433347 | 
orchestrator | ok: [testbed-node-5] 2026-04-09 04:12:08.433358 | orchestrator | 2026-04-09 04:12:08.433369 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-09 04:12:08.433380 | orchestrator | Thursday 09 April 2026 04:11:59 +0000 (0:00:00.583) 0:00:19.132 ******** 2026-04-09 04:12:08.433391 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:12:08.433402 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:12:08.433413 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:12:08.433424 | orchestrator | 2026-04-09 04:12:08.433435 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-09 04:12:08.433447 | orchestrator | Thursday 09 April 2026 04:11:59 +0000 (0:00:00.316) 0:00:19.449 ******** 2026-04-09 04:12:08.433458 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:12:08.433469 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:12:08.433483 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:12:08.433495 | orchestrator | 2026-04-09 04:12:08.433509 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 04:12:08.433522 | orchestrator | Thursday 09 April 2026 04:12:00 +0000 (0:00:00.317) 0:00:19.767 ******** 2026-04-09 04:12:08.433535 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:12:08.433548 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:12:08.433561 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:12:08.433574 | orchestrator | 2026-04-09 04:12:08.433587 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-09 04:12:08.433600 | orchestrator | Thursday 09 April 2026 04:12:00 +0000 (0:00:00.521) 0:00:20.288 ******** 2026-04-09 04:12:08.433612 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:12:08.433653 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:12:08.433687 | orchestrator | ok: [testbed-node-5] 
2026-04-09 04:12:08.433700 | orchestrator | 2026-04-09 04:12:08.433714 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-09 04:12:08.433728 | orchestrator | Thursday 09 April 2026 04:12:01 +0000 (0:00:00.801) 0:00:21.089 ******** 2026-04-09 04:12:08.433741 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:12:08.433754 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:12:08.433767 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:12:08.433780 | orchestrator | 2026-04-09 04:12:08.433792 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-09 04:12:08.433805 | orchestrator | Thursday 09 April 2026 04:12:01 +0000 (0:00:00.313) 0:00:21.403 ******** 2026-04-09 04:12:08.433819 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:12:08.433832 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:12:08.433844 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:12:08.433855 | orchestrator | 2026-04-09 04:12:08.433867 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-09 04:12:08.433878 | orchestrator | Thursday 09 April 2026 04:12:02 +0000 (0:00:00.334) 0:00:21.737 ******** 2026-04-09 04:12:08.433888 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:12:08.433899 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:12:08.433910 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:12:08.433921 | orchestrator | 2026-04-09 04:12:08.433932 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-09 04:12:08.433943 | orchestrator | Thursday 09 April 2026 04:12:02 +0000 (0:00:00.547) 0:00:22.284 ******** 2026-04-09 04:12:08.433954 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 04:12:08.433966 | orchestrator | 2026-04-09 04:12:08.433977 | orchestrator | TASK [Set validation result to failed if a test failed] 
************************ 2026-04-09 04:12:08.433988 | orchestrator | Thursday 09 April 2026 04:12:02 +0000 (0:00:00.295) 0:00:22.580 ******** 2026-04-09 04:12:08.433999 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:12:08.434010 | orchestrator | 2026-04-09 04:12:08.434080 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 04:12:08.434092 | orchestrator | Thursday 09 April 2026 04:12:03 +0000 (0:00:00.270) 0:00:22.851 ******** 2026-04-09 04:12:08.434103 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 04:12:08.434114 | orchestrator | 2026-04-09 04:12:08.434125 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 04:12:08.434136 | orchestrator | Thursday 09 April 2026 04:12:04 +0000 (0:00:01.773) 0:00:24.625 ******** 2026-04-09 04:12:08.434147 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 04:12:08.434158 | orchestrator | 2026-04-09 04:12:08.434169 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 04:12:08.434180 | orchestrator | Thursday 09 April 2026 04:12:05 +0000 (0:00:00.274) 0:00:24.899 ******** 2026-04-09 04:12:08.434191 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 04:12:08.434202 | orchestrator | 2026-04-09 04:12:08.434233 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:12:08.434245 | orchestrator | Thursday 09 April 2026 04:12:05 +0000 (0:00:00.300) 0:00:25.199 ******** 2026-04-09 04:12:08.434256 | orchestrator | 2026-04-09 04:12:08.434267 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:12:08.434278 | orchestrator | Thursday 09 April 2026 04:12:05 +0000 (0:00:00.073) 0:00:25.273 ******** 2026-04-09 04:12:08.434289 | orchestrator | 2026-04-09 
04:12:08.434300 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 04:12:08.434310 | orchestrator | Thursday 09 April 2026 04:12:05 +0000 (0:00:00.091) 0:00:25.364 ******** 2026-04-09 04:12:08.434321 | orchestrator | 2026-04-09 04:12:08.434332 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-09 04:12:08.434343 | orchestrator | Thursday 09 April 2026 04:12:05 +0000 (0:00:00.076) 0:00:25.441 ******** 2026-04-09 04:12:08.434363 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 04:12:08.434374 | orchestrator | 2026-04-09 04:12:08.434385 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 04:12:08.434396 | orchestrator | Thursday 09 April 2026 04:12:07 +0000 (0:00:01.649) 0:00:27.090 ******** 2026-04-09 04:12:08.434413 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-09 04:12:08.434425 | orchestrator |  "msg": [ 2026-04-09 04:12:08.434436 | orchestrator |  "Validator run completed.", 2026-04-09 04:12:08.434448 | orchestrator |  "You can find the report file here:", 2026-04-09 04:12:08.434459 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-09T04:11:41+00:00-report.json", 2026-04-09 04:12:08.434470 | orchestrator |  "on the following host:", 2026-04-09 04:12:08.434481 | orchestrator |  "testbed-manager" 2026-04-09 04:12:08.434492 | orchestrator |  ] 2026-04-09 04:12:08.434504 | orchestrator | } 2026-04-09 04:12:08.434515 | orchestrator | 2026-04-09 04:12:08.434526 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:12:08.434538 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 04:12:08.434551 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  
rescued=0 ignored=0 2026-04-09 04:12:08.434562 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 04:12:08.434573 | orchestrator | 2026-04-09 04:12:08.434584 | orchestrator | 2026-04-09 04:12:08.434595 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:12:08.434606 | orchestrator | Thursday 09 April 2026 04:12:08 +0000 (0:00:00.633) 0:00:27.724 ******** 2026-04-09 04:12:08.434647 | orchestrator | =============================================================================== 2026-04-09 04:12:08.434658 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.70s 2026-04-09 04:12:08.434669 | orchestrator | Aggregate test results step one ----------------------------------------- 1.77s 2026-04-09 04:12:08.434680 | orchestrator | Write report file ------------------------------------------------------- 1.65s 2026-04-09 04:12:08.434691 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.60s 2026-04-09 04:12:08.434702 | orchestrator | Get timestamp for report file ------------------------------------------- 0.90s 2026-04-09 04:12:08.434713 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.83s 2026-04-09 04:12:08.434723 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.80s 2026-04-09 04:12:08.434734 | orchestrator | Create report output directory ------------------------------------------ 0.74s 2026-04-09 04:12:08.434745 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.74s 2026-04-09 04:12:08.434756 | orchestrator | Aggregate test results step one ----------------------------------------- 0.70s 2026-04-09 04:12:08.434767 | orchestrator | Print report file information ------------------------------------------- 0.63s 2026-04-09 04:12:08.434777 | 
orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.60s 2026-04-09 04:12:08.434788 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.58s 2026-04-09 04:12:08.434799 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.55s 2026-04-09 04:12:08.434810 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2026-04-09 04:12:08.434821 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.52s 2026-04-09 04:12:08.434831 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.51s 2026-04-09 04:12:08.434842 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2026-04-09 04:12:08.434860 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.49s 2026-04-09 04:12:08.434871 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.38s 2026-04-09 04:12:08.787715 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-09 04:12:08.796338 | orchestrator | + set -e 2026-04-09 04:12:08.796527 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 04:12:08.796547 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 04:12:08.796559 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 04:12:08.796570 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 04:12:08.796581 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 04:12:08.796593 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 04:12:08.796605 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 04:12:08.796663 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 04:12:08.796675 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 04:12:08.796686 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 
04:12:08.796697 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 04:12:08.796708 | orchestrator | ++ export ARA=false 2026-04-09 04:12:08.796719 | orchestrator | ++ ARA=false 2026-04-09 04:12:08.796730 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 04:12:08.796742 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 04:12:08.796752 | orchestrator | ++ export TEMPEST=false 2026-04-09 04:12:08.796763 | orchestrator | ++ TEMPEST=false 2026-04-09 04:12:08.796774 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 04:12:08.796785 | orchestrator | ++ IS_ZUUL=true 2026-04-09 04:12:08.796796 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 04:12:08.796808 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 04:12:08.796819 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 04:12:08.796829 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 04:12:08.796840 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 04:12:08.796851 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 04:12:08.796863 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 04:12:08.796873 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 04:12:08.796884 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 04:12:08.796895 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 04:12:08.796906 | orchestrator | + source /etc/os-release 2026-04-09 04:12:08.796917 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-09 04:12:08.796928 | orchestrator | ++ NAME=Ubuntu 2026-04-09 04:12:08.796939 | orchestrator | ++ VERSION_ID=24.04 2026-04-09 04:12:08.796949 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-09 04:12:08.796960 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-09 04:12:08.796971 | orchestrator | ++ ID=ubuntu 2026-04-09 04:12:08.796982 | orchestrator | ++ ID_LIKE=debian 2026-04-09 04:12:08.796993 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-09 04:12:08.797004 | 
orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-09 04:12:08.797015 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-09 04:12:08.797025 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-09 04:12:08.797037 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-09 04:12:08.797048 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-09 04:12:08.797059 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-09 04:12:08.797071 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-09 04:12:08.797094 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-09 04:12:08.828211 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-09 04:12:34.099269 | orchestrator | 2026-04-09 04:12:34.099379 | orchestrator | # Status of Elasticsearch 2026-04-09 04:12:34.099398 | orchestrator | 2026-04-09 04:12:34.099410 | orchestrator | + pushd /opt/configuration/contrib 2026-04-09 04:12:34.099423 | orchestrator | + echo 2026-04-09 04:12:34.099435 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-09 04:12:34.099446 | orchestrator | + echo 2026-04-09 04:12:34.099457 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-09 04:12:34.306745 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-09 04:12:34.307027 | orchestrator | 2026-04-09 04:12:34.307054 | orchestrator | # Status of MariaDB 2026-04-09 04:12:34.307068 | orchestrator | 2026-04-09 04:12:34.307080 | orchestrator | + echo 2026-04-09 04:12:34.307091 | orchestrator | + echo '# Status of MariaDB' 2026-04-09 04:12:34.307102 | orchestrator | + echo 2026-04-09 04:12:34.308926 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-09 04:12:34.378845 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 04:12:34.378964 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-09 04:12:34.378992 | orchestrator | + MARIADB_USER=root_shard_0 2026-04-09 04:12:34.379012 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-04-09 04:12:34.468131 | orchestrator | Reading package lists... 2026-04-09 04:12:34.894691 | orchestrator | Building dependency tree... 2026-04-09 04:12:34.895648 | orchestrator | Reading state information... 2026-04-09 04:12:35.506283 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-04-09 04:12:35.506387 | orchestrator | bc set to manually installed. 2026-04-09 04:12:35.506405 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-04-09 04:12:36.267519 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-04-09 04:12:36.268418 | orchestrator | 2026-04-09 04:12:36.268522 | orchestrator | # Status of Prometheus 2026-04-09 04:12:36.268548 | orchestrator | 2026-04-09 04:12:36.268644 | orchestrator | + echo 2026-04-09 04:12:36.268659 | orchestrator | + echo '# Status of Prometheus' 2026-04-09 04:12:36.268671 | orchestrator | + echo 2026-04-09 04:12:36.268682 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-09 04:12:36.348175 | orchestrator | Unauthorized 2026-04-09 04:12:36.357304 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-09 04:12:36.419261 | orchestrator | Unauthorized 2026-04-09 04:12:36.423767 | orchestrator | 2026-04-09 04:12:36.423833 | orchestrator | # Status of RabbitMQ 2026-04-09 04:12:36.423847 | orchestrator | 2026-04-09 04:12:36.423859 | orchestrator | + echo 2026-04-09 04:12:36.423871 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-09 04:12:36.423883 | orchestrator | + echo 2026-04-09 04:12:36.424497 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-09 04:12:36.491057 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 04:12:36.491146 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-09 04:12:36.491162 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-04-09 04:12:36.974119 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-04-09 04:12:36.991882 | orchestrator | 2026-04-09 04:12:36.991991 | orchestrator | # Status of Redis 2026-04-09 04:12:36.992012 | orchestrator | 2026-04-09 04:12:36.992031 | orchestrator | + echo 2026-04-09 04:12:36.992050 | orchestrator | + echo '# Status of Redis' 2026-04-09 04:12:36.992070 | orchestrator | + echo 2026-04-09 04:12:36.992088 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A 
-E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-09 04:12:36.999435 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.002676s;;;0.000000;10.000000 2026-04-09 04:12:37.000705 | orchestrator | 2026-04-09 04:12:37.000736 | orchestrator | # Create backup of MariaDB database 2026-04-09 04:12:37.000746 | orchestrator | 2026-04-09 04:12:37.000753 | orchestrator | + popd 2026-04-09 04:12:37.000760 | orchestrator | + echo 2026-04-09 04:12:37.000767 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-09 04:12:37.000775 | orchestrator | + echo 2026-04-09 04:12:37.000782 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-09 04:12:39.189397 | orchestrator | 2026-04-09 04:12:39 | INFO  | Task 8074a61d-96cc-44c3-a74e-93b9f878a566 (mariadb_backup) was prepared for execution. 2026-04-09 04:12:39.189524 | orchestrator | 2026-04-09 04:12:39 | INFO  | It takes a moment until task 8074a61d-96cc-44c3-a74e-93b9f878a566 (mariadb_backup) has been started and output is visible here. 
2026-04-09 04:13:09.688851 | orchestrator | 2026-04-09 04:13:09.688964 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 04:13:09.688982 | orchestrator | 2026-04-09 04:13:09.689013 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 04:13:09.689026 | orchestrator | Thursday 09 April 2026 04:12:43 +0000 (0:00:00.194) 0:00:00.194 ******** 2026-04-09 04:13:09.689104 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:13:09.689141 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:13:09.689153 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:13:09.689164 | orchestrator | 2026-04-09 04:13:09.689175 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 04:13:09.689243 | orchestrator | Thursday 09 April 2026 04:12:44 +0000 (0:00:00.364) 0:00:00.559 ******** 2026-04-09 04:13:09.689255 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-09 04:13:09.689267 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-09 04:13:09.689278 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-09 04:13:09.689289 | orchestrator | 2026-04-09 04:13:09.689300 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-09 04:13:09.689311 | orchestrator | 2026-04-09 04:13:09.689322 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-09 04:13:09.689334 | orchestrator | Thursday 09 April 2026 04:12:44 +0000 (0:00:00.634) 0:00:01.193 ******** 2026-04-09 04:13:09.689345 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 04:13:09.689356 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-09 04:13:09.689367 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-09 04:13:09.689411 | orchestrator | 
2026-04-09 04:13:09.689426 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 04:13:09.689447 | orchestrator | Thursday 09 April 2026 04:12:45 +0000 (0:00:00.449) 0:00:01.643 ******** 2026-04-09 04:13:09.689461 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:13:09.689541 | orchestrator | 2026-04-09 04:13:09.689576 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-09 04:13:09.689590 | orchestrator | Thursday 09 April 2026 04:12:45 +0000 (0:00:00.564) 0:00:02.208 ******** 2026-04-09 04:13:09.689604 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:13:09.689617 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:13:09.689630 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:13:09.689642 | orchestrator | 2026-04-09 04:13:09.689655 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-09 04:13:09.689668 | orchestrator | Thursday 09 April 2026 04:12:49 +0000 (0:00:03.626) 0:00:05.834 ******** 2026-04-09 04:13:09.689681 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-09 04:13:09.689693 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-09 04:13:09.689707 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-09 04:13:09.689720 | orchestrator | mariadb_bootstrap_restart 2026-04-09 04:13:09.689731 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:13:09.689742 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:13:09.689753 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:13:09.689764 | orchestrator | 2026-04-09 04:13:09.689775 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-09 04:13:09.689786 | orchestrator | 
skipping: no hosts matched 2026-04-09 04:13:09.689796 | orchestrator | 2026-04-09 04:13:09.689807 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-09 04:13:09.689818 | orchestrator | skipping: no hosts matched 2026-04-09 04:13:09.689829 | orchestrator | 2026-04-09 04:13:09.689840 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-09 04:13:09.689850 | orchestrator | skipping: no hosts matched 2026-04-09 04:13:09.689861 | orchestrator | 2026-04-09 04:13:09.689872 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-09 04:13:09.689883 | orchestrator | 2026-04-09 04:13:09.689893 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-09 04:13:09.689904 | orchestrator | Thursday 09 April 2026 04:13:08 +0000 (0:00:18.979) 0:00:24.813 ******** 2026-04-09 04:13:09.689915 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:13:09.689926 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:13:09.689947 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:13:09.689958 | orchestrator | 2026-04-09 04:13:09.689969 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-09 04:13:09.689980 | orchestrator | Thursday 09 April 2026 04:13:08 +0000 (0:00:00.326) 0:00:25.139 ******** 2026-04-09 04:13:09.689991 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:13:09.690002 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:13:09.690013 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:13:09.690085 | orchestrator | 2026-04-09 04:13:09.690096 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:13:09.690109 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 
04:13:09.690121 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 04:13:09.690133 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 04:13:09.690144 | orchestrator | 2026-04-09 04:13:09.690155 | orchestrator | 2026-04-09 04:13:09.690166 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:13:09.690177 | orchestrator | Thursday 09 April 2026 04:13:09 +0000 (0:00:00.541) 0:00:25.681 ******** 2026-04-09 04:13:09.690188 | orchestrator | =============================================================================== 2026-04-09 04:13:09.690199 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.98s 2026-04-09 04:13:09.690230 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.63s 2026-04-09 04:13:09.690242 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-04-09 04:13:09.690252 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2026-04-09 04:13:09.690263 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.54s 2026-04-09 04:13:09.690274 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.45s 2026-04-09 04:13:09.690285 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-04-09 04:13:09.690296 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s 2026-04-09 04:13:10.177070 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-09 04:13:10.188948 | orchestrator | + set -e 2026-04-09 04:13:10.189260 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 04:13:10.189695 | orchestrator | ++ export 
INTERACTIVE=false 2026-04-09 04:13:10.189813 | orchestrator | ++ INTERACTIVE=false 2026-04-09 04:13:10.189842 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 04:13:10.189861 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 04:13:10.190445 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-09 04:13:10.192021 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-09 04:13:10.200833 | orchestrator | 2026-04-09 04:13:10.200911 | orchestrator | # OpenStack endpoints 2026-04-09 04:13:10.200935 | orchestrator | 2026-04-09 04:13:10.200948 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 04:13:10.200959 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 04:13:10.200970 | orchestrator | + export OS_CLOUD=admin 2026-04-09 04:13:10.200981 | orchestrator | + OS_CLOUD=admin 2026-04-09 04:13:10.201004 | orchestrator | + echo 2026-04-09 04:13:10.201016 | orchestrator | + echo '# OpenStack endpoints' 2026-04-09 04:13:10.201027 | orchestrator | + echo 2026-04-09 04:13:10.201038 | orchestrator | + openstack endpoint list 2026-04-09 04:13:13.621306 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-09 04:13:13.621431 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-09 04:13:13.621452 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-09 04:13:13.621484 | orchestrator | | 01198f1f1b394395ae7d631ae91b24f0 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-09 04:13:13.621524 | orchestrator | | 1756fca98f3948f790ae1155e765f596 | RegionOne | skyline | panel | True | internal | 
https://api-int.testbed.osism.xyz:9998 | 2026-04-09 04:13:13.621536 | orchestrator | | 1d56884561b645b38550ba654ecdfeba | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-09 04:13:13.621545 | orchestrator | | 26748ab3034643b2aa9c617cc05e04c8 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-04-09 04:13:13.621555 | orchestrator | | 3f09e594988e411eabb05aca176f9b46 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-09 04:13:13.621564 | orchestrator | | 450c268822ff4be986c9edb5bca19ec6 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-04-09 04:13:13.621575 | orchestrator | | 494d04b6010c4407ba8bc5f6f9cccb12 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-09 04:13:13.621584 | orchestrator | | 4cff745d900c4b7fa107ca5e90f8fb00 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-09 04:13:13.621594 | orchestrator | | 581f9361869a4a60b70e42d1a38c1c15 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-09 04:13:13.621603 | orchestrator | | 5ee962acb09640e18e996431623b1b0d | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-09 04:13:13.621613 | orchestrator | | 63ee523ffba44f14b4e1b5eab789f703 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-04-09 04:13:13.621623 | orchestrator | | 644c82f692a546769acb48c32227647a | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-09 04:13:13.621632 | orchestrator | | 75cf4d946075411b8387be645ae040de | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-09 04:13:13.621642 | orchestrator | | 
795db67106ab465b907079d7a45fe954 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-09 04:13:13.621652 | orchestrator | | 7b71b8ed10ab4c72a7f2ca008bd7d34f | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-04-09 04:13:13.621661 | orchestrator | | 87c948e37ee449ef839d7b1b02f588f4 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-09 04:13:13.621671 | orchestrator | | 8ad355ad85694f4ca13519a184b0b55c | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-09 04:13:13.621680 | orchestrator | | 8cdc8e37d3ea4cb6a782dd109cb6fa63 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-09 04:13:13.621690 | orchestrator | | 8f3a0a6b4b374aa48e4a521db8b47058 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-09 04:13:13.621700 | orchestrator | | ad85df22d54940a7a8b2f3c30eca116a | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-09 04:13:13.621732 | orchestrator | | af3e9b6cc57f4edd8b6fa6427fe79dde | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-09 04:13:13.621749 | orchestrator | | be6bac4b55a04670b170964301d8247b | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-04-09 04:13:13.621759 | orchestrator | | bfd2168f7891445c8ca1c0f557a7cc14 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-09 04:13:13.621769 | orchestrator | | c5ffe25b1e194c9b978b6a0a9410d6ca | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-09 04:13:13.621778 | orchestrator | | eb1264ab1ed34ba9bc459033843eae33 | RegionOne | magnum | container-infra | True | 
public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-09 04:13:13.621788 | orchestrator | | ef903724adcc4ed2bb3bdb0394e7a2af | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-09 04:13:13.621797 | orchestrator | | f0b7aba7fb2c4448aed52713173f4f62 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-09 04:13:13.621807 | orchestrator | | f0ecc2a665db4687ba89598f79a9c45d | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-09 04:13:13.621816 | orchestrator | | f233ca1285d1417c98046ccaf16f00fb | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-09 04:13:13.621826 | orchestrator | | f7523afcaa4c43a6a5d02636e6321eb5 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-09 04:13:13.621837 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-09 04:13:13.948005 | orchestrator | 2026-04-09 04:13:13.948121 | orchestrator | # Cinder 2026-04-09 04:13:13.948138 | orchestrator | 2026-04-09 04:13:13.948150 | orchestrator | + echo 2026-04-09 04:13:13.948162 | orchestrator | + echo '# Cinder' 2026-04-09 04:13:13.948173 | orchestrator | + echo 2026-04-09 04:13:13.948184 | orchestrator | + openstack volume service list 2026-04-09 04:13:16.756994 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-09 04:13:16.757105 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-09 04:13:16.757122 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-09 
04:13:16.757135 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-09T04:13:15.000000 | 2026-04-09 04:13:16.757146 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-09T04:13:14.000000 | 2026-04-09 04:13:16.757157 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-09T04:13:15.000000 | 2026-04-09 04:13:16.757168 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-09T04:13:14.000000 | 2026-04-09 04:13:16.757179 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-09T04:13:13.000000 | 2026-04-09 04:13:16.757190 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-09T04:13:13.000000 | 2026-04-09 04:13:16.757201 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-09T04:13:09.000000 | 2026-04-09 04:13:16.757212 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-09T04:13:12.000000 | 2026-04-09 04:13:16.757249 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-09T04:13:12.000000 | 2026-04-09 04:13:16.757261 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-09 04:13:17.051354 | orchestrator | + echo 2026-04-09 04:13:17.051439 | orchestrator | 2026-04-09 04:13:17.051452 | orchestrator | # Neutron 2026-04-09 04:13:17.051462 | orchestrator | 2026-04-09 04:13:17.051472 | orchestrator | + echo '# Neutron' 2026-04-09 04:13:17.051483 | orchestrator | + echo 2026-04-09 04:13:17.051544 | orchestrator | + openstack network agent list 2026-04-09 04:13:19.878222 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-09 04:13:19.878328 | orchestrator | | ID | 
Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-09 04:13:19.878344 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-09 04:13:19.878356 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-09 04:13:19.878367 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-09 04:13:19.878398 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-09 04:13:19.878410 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-09 04:13:19.878421 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-09 04:13:19.878431 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-09 04:13:19.878442 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-09 04:13:19.878453 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-09 04:13:19.878463 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-09 04:13:19.878474 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-09 04:13:20.183120 | orchestrator | + openstack network service provider list 2026-04-09 04:13:22.898848 | orchestrator | +---------------+------+---------+ 2026-04-09 04:13:22.898965 | orchestrator 
| | Service Type | Name | Default | 2026-04-09 04:13:22.898980 | orchestrator | +---------------+------+---------+ 2026-04-09 04:13:22.898992 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-09 04:13:22.899003 | orchestrator | +---------------+------+---------+ 2026-04-09 04:13:23.209899 | orchestrator | 2026-04-09 04:13:23.209995 | orchestrator | + echo 2026-04-09 04:13:23.210827 | orchestrator | # Nova 2026-04-09 04:13:23.210868 | orchestrator | 2026-04-09 04:13:23.210882 | orchestrator | + echo '# Nova' 2026-04-09 04:13:23.210893 | orchestrator | + echo 2026-04-09 04:13:23.210905 | orchestrator | + openstack compute service list 2026-04-09 04:13:26.564858 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-09 04:13:26.565019 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-09 04:13:26.565032 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-09 04:13:26.565068 | orchestrator | | 31c51113-b50e-4615-80c2-a2afe4b5f2dc | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-09T04:13:21.000000 | 2026-04-09 04:13:26.565078 | orchestrator | | df662cc0-a598-4354-a0bb-ddc353663aa6 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-09T04:13:17.000000 | 2026-04-09 04:13:26.565087 | orchestrator | | 9b18bfeb-f5a8-4e14-b1f1-9399e2034902 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-09T04:13:18.000000 | 2026-04-09 04:13:26.565097 | orchestrator | | 0979890e-5870-420f-b954-98e42363db9c | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-09T04:13:17.000000 | 2026-04-09 04:13:26.565106 | orchestrator | | e0cbf10c-6786-48b0-8f70-5b76cf6fa7d5 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-09T04:13:19.000000 | 2026-04-09 
04:13:26.565115 | orchestrator | | 32743f2d-6ac0-4639-a913-b612dcdeaf80 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-09T04:13:19.000000 | 2026-04-09 04:13:26.565124 | orchestrator | | 85262c24-7696-493c-8aae-3dd7b32dade9 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-09T04:13:24.000000 | 2026-04-09 04:13:26.565133 | orchestrator | | 4942290a-f485-4952-bfa7-f0e07c15f3ac | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-09T04:13:24.000000 | 2026-04-09 04:13:26.565141 | orchestrator | | 6b529cbb-0f2b-41ff-a263-2cca7c269cab | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-09T04:13:24.000000 | 2026-04-09 04:13:26.565150 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-09 04:13:26.965594 | orchestrator | + openstack hypervisor list 2026-04-09 04:13:29.804567 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-09 04:13:29.804699 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-09 04:13:29.804727 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-09 04:13:29.804748 | orchestrator | | cbe61515-e3c9-4d95-b064-c84b7292b51e | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-09 04:13:29.804768 | orchestrator | | 998ed68f-9357-49e9-872d-d4b4b5f51e4b | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-09 04:13:29.804786 | orchestrator | | fe125c42-7366-4004-bfac-b91e256bacce | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-09 04:13:29.804802 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-09 04:13:30.210359 | orchestrator | 2026-04-09 04:13:30.210456 | orchestrator | # Run OpenStack test play 2026-04-09 
04:13:30.210546 | orchestrator | 2026-04-09 04:13:30.210560 | orchestrator | + echo 2026-04-09 04:13:30.210573 | orchestrator | + echo '# Run OpenStack test play' 2026-04-09 04:13:30.210590 | orchestrator | + echo 2026-04-09 04:13:30.210602 | orchestrator | + osism apply --environment openstack test 2026-04-09 04:13:32.440288 | orchestrator | 2026-04-09 04:13:32 | INFO  | Trying to run play test in environment openstack 2026-04-09 04:13:42.566742 | orchestrator | 2026-04-09 04:13:42 | INFO  | Task 97863864-ec7b-476a-813e-836775132f40 (test) was prepared for execution. 2026-04-09 04:13:42.566839 | orchestrator | 2026-04-09 04:13:42 | INFO  | It takes a moment until task 97863864-ec7b-476a-813e-836775132f40 (test) has been started and output is visible here. 2026-04-09 04:16:52.602440 | orchestrator | 2026-04-09 04:16:52.602546 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-09 04:16:52.602563 | orchestrator | 2026-04-09 04:16:52.602575 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-09 04:16:52.602587 | orchestrator | Thursday 09 April 2026 04:13:47 +0000 (0:00:00.109) 0:00:00.109 ******** 2026-04-09 04:16:52.602599 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.602610 | orchestrator | 2026-04-09 04:16:52.602621 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-09 04:16:52.602632 | orchestrator | Thursday 09 April 2026 04:13:51 +0000 (0:00:03.954) 0:00:04.064 ******** 2026-04-09 04:16:52.602662 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.602674 | orchestrator | 2026-04-09 04:16:52.602691 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-09 04:16:52.602711 | orchestrator | Thursday 09 April 2026 04:13:55 +0000 (0:00:04.227) 0:00:08.292 ******** 2026-04-09 04:16:52.602729 | orchestrator | changed: [localhost] 2026-04-09 
04:16:52.602746 | orchestrator | 2026-04-09 04:16:52.602764 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-09 04:16:52.602781 | orchestrator | Thursday 09 April 2026 04:14:02 +0000 (0:00:07.091) 0:00:15.384 ******** 2026-04-09 04:16:52.602799 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.602817 | orchestrator | 2026-04-09 04:16:52.602837 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-09 04:16:52.602856 | orchestrator | Thursday 09 April 2026 04:14:06 +0000 (0:00:04.114) 0:00:19.498 ******** 2026-04-09 04:16:52.602876 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.602893 | orchestrator | 2026-04-09 04:16:52.602904 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-09 04:16:52.602915 | orchestrator | Thursday 09 April 2026 04:14:10 +0000 (0:00:04.376) 0:00:23.874 ******** 2026-04-09 04:16:52.602926 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-09 04:16:52.602938 | orchestrator | changed: [localhost] => (item=member) 2026-04-09 04:16:52.602949 | orchestrator | changed: [localhost] => (item=creator) 2026-04-09 04:16:52.602961 | orchestrator | 2026-04-09 04:16:52.602972 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-09 04:16:52.602983 | orchestrator | Thursday 09 April 2026 04:14:22 +0000 (0:00:11.971) 0:00:35.846 ******** 2026-04-09 04:16:52.602994 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.603006 | orchestrator | 2026-04-09 04:16:52.603036 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-09 04:16:52.603049 | orchestrator | Thursday 09 April 2026 04:14:27 +0000 (0:00:04.389) 0:00:40.235 ******** 2026-04-09 04:16:52.603062 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.603074 | orchestrator | 2026-04-09 
04:16:52.603087 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-09 04:16:52.603100 | orchestrator | Thursday 09 April 2026 04:14:32 +0000 (0:00:05.169) 0:00:45.405 ******** 2026-04-09 04:16:52.603138 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.603152 | orchestrator | 2026-04-09 04:16:52.603165 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-09 04:16:52.603179 | orchestrator | Thursday 09 April 2026 04:14:36 +0000 (0:00:04.426) 0:00:49.832 ******** 2026-04-09 04:16:52.603191 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.603204 | orchestrator | 2026-04-09 04:16:52.603217 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-04-09 04:16:52.603230 | orchestrator | Thursday 09 April 2026 04:14:40 +0000 (0:00:04.010) 0:00:53.842 ******** 2026-04-09 04:16:52.603242 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.603255 | orchestrator | 2026-04-09 04:16:52.603268 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-09 04:16:52.603281 | orchestrator | Thursday 09 April 2026 04:14:45 +0000 (0:00:04.195) 0:00:58.038 ******** 2026-04-09 04:16:52.603294 | orchestrator | changed: [localhost] 2026-04-09 04:16:52.603306 | orchestrator | 2026-04-09 04:16:52.603319 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-09 04:16:52.603331 | orchestrator | Thursday 09 April 2026 04:14:49 +0000 (0:00:03.987) 0:01:02.025 ******** 2026-04-09 04:16:52.603344 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-09 04:16:52.603356 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-09 04:16:52.603368 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-09 04:16:52.603378 | orchestrator | 2026-04-09 04:16:52.603390 | 
orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-09 04:16:52.603411 | orchestrator | Thursday 09 April 2026 04:15:02 +0000 (0:00:13.916) 0:01:15.942 ******** 2026-04-09 04:16:52.603422 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-09 04:16:52.603434 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-09 04:16:52.603445 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-09 04:16:52.603456 | orchestrator | 2026-04-09 04:16:52.603467 | orchestrator | TASK [Create test routers] ***************************************************** 2026-04-09 04:16:52.603478 | orchestrator | Thursday 09 April 2026 04:15:18 +0000 (0:00:16.025) 0:01:31.967 ******** 2026-04-09 04:16:52.603489 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-09 04:16:52.603504 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-09 04:16:52.603516 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-09 04:16:52.603527 | orchestrator | 2026-04-09 04:16:52.603538 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-09 04:16:52.603548 | orchestrator | 2026-04-09 04:16:52.603560 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-09 04:16:52.603588 | orchestrator | Thursday 09 April 2026 04:15:51 +0000 (0:00:32.528) 0:02:04.496 ******** 2026-04-09 04:16:52.603601 | orchestrator | ok: [localhost] 2026-04-09 04:16:52.603612 | orchestrator | 2026-04-09 04:16:52.603623 | orchestrator | TASK [Detach test volume] 
****************************************************** 2026-04-09 04:16:52.603634 | orchestrator | Thursday 09 April 2026 04:15:55 +0000 (0:00:03.620) 0:02:08.116 ******** 2026-04-09 04:16:52.603644 | orchestrator | skipping: [localhost] 2026-04-09 04:16:52.603655 | orchestrator | 2026-04-09 04:16:52.603666 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-09 04:16:52.603677 | orchestrator | Thursday 09 April 2026 04:15:55 +0000 (0:00:00.059) 0:02:08.176 ******** 2026-04-09 04:16:52.603688 | orchestrator | skipping: [localhost] 2026-04-09 04:16:52.603699 | orchestrator | 2026-04-09 04:16:52.603709 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-09 04:16:52.603720 | orchestrator | Thursday 09 April 2026 04:15:55 +0000 (0:00:00.053) 0:02:08.230 ******** 2026-04-09 04:16:52.603731 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-09 04:16:52.603742 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-09 04:16:52.603753 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-09 04:16:52.603764 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-09 04:16:52.603774 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-09 04:16:52.603785 | orchestrator | skipping: [localhost] 2026-04-09 04:16:52.603796 | orchestrator | 2026-04-09 04:16:52.603807 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-09 04:16:52.603818 | orchestrator | Thursday 09 April 2026 04:15:55 +0000 (0:00:00.178) 0:02:08.408 ******** 2026-04-09 04:16:52.603828 | orchestrator | skipping: [localhost] 2026-04-09 04:16:52.603839 | orchestrator | 2026-04-09 04:16:52.603850 | orchestrator | TASK [Create test instances] 
*************************************************** 2026-04-09 04:16:52.603861 | orchestrator | Thursday 09 April 2026 04:15:55 +0000 (0:00:00.165) 0:02:08.574 ******** 2026-04-09 04:16:52.603872 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 04:16:52.603882 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 04:16:52.603893 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 04:16:52.603904 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 04:16:52.603921 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 04:16:52.603932 | orchestrator | 2026-04-09 04:16:52.603943 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-09 04:16:52.603954 | orchestrator | Thursday 09 April 2026 04:16:00 +0000 (0:00:05.242) 0:02:13.817 ******** 2026-04-09 04:16:52.603965 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-09 04:16:52.603977 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-09 04:16:52.603988 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-09 04:16:52.603999 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
2026-04-09 04:16:52.604013 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j92911093270.3783', 'results_file': '/ansible/.ansible_async/j92911093270.3783', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:16:52.604027 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j435769402970.3808', 'results_file': '/ansible/.ansible_async/j435769402970.3808', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:16:52.604038 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j623456933307.3833', 'results_file': '/ansible/.ansible_async/j623456933307.3833', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:16:52.604050 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j372827135990.3858', 'results_file': '/ansible/.ansible_async/j372827135990.3858', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:16:52.604065 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j435953863842.3883', 'results_file': '/ansible/.ansible_async/j435953863842.3883', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:16:52.604077 | orchestrator | 2026-04-09 04:16:52.604088 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-04-09 04:16:52.604099 | orchestrator | Thursday 09 April 2026 04:16:47 +0000 (0:00:47.123) 0:03:00.940 ******** 2026-04-09 04:16:52.604156 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 04:16:52.604179 | 
orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 04:18:04.614582 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 04:18:04.615517 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 04:18:04.615545 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 04:18:04.615558 | orchestrator | 2026-04-09 04:18:04.615571 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-09 04:18:04.615583 | orchestrator | Thursday 09 April 2026 04:16:52 +0000 (0:00:04.681) 0:03:05.621 ******** 2026-04-09 04:18:04.615593 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-04-09 04:18:04.615608 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j287045007620.3979', 'results_file': '/ansible/.ansible_async/j287045007620.3979', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615623 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j505373379484.4004', 'results_file': '/ansible/.ansible_async/j505373379484.4004', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615657 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j766681881593.4029', 'results_file': '/ansible/.ansible_async/j766681881593.4029', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615670 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j265158005837.4054', 'results_file': 
'/ansible/.ansible_async/j265158005837.4054', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615681 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j140523621877.4086', 'results_file': '/ansible/.ansible_async/j140523621877.4086', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615693 | orchestrator | 2026-04-09 04:18:04.615704 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-09 04:18:04.615716 | orchestrator | Thursday 09 April 2026 04:17:02 +0000 (0:00:09.571) 0:03:15.193 ******** 2026-04-09 04:18:04.615727 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 04:18:04.615738 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 04:18:04.615750 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 04:18:04.615761 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 04:18:04.615772 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 04:18:04.615783 | orchestrator | 2026-04-09 04:18:04.615796 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-09 04:18:04.615807 | orchestrator | Thursday 09 April 2026 04:17:07 +0000 (0:00:04.915) 0:03:20.109 ******** 2026-04-09 04:18:04.615819 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-04-09 04:18:04.615830 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j418813472075.4155', 'results_file': '/ansible/.ansible_async/j418813472075.4155', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615842 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j567401120152.4180', 'results_file': '/ansible/.ansible_async/j567401120152.4180', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615854 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j839989272215.4206', 'results_file': '/ansible/.ansible_async/j839989272215.4206', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615879 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j48140569354.4232', 'results_file': '/ansible/.ansible_async/j48140569354.4232', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615909 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j721383438369.4258', 'results_file': '/ansible/.ansible_async/j721383438369.4258', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-09 04:18:04.615921 | orchestrator | 2026-04-09 04:18:04.615932 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-09 04:18:04.615943 | orchestrator | Thursday 09 April 2026 04:17:17 +0000 (0:00:10.537) 0:03:30.646 ******** 2026-04-09 04:18:04.615955 | orchestrator | changed: [localhost] 2026-04-09 04:18:04.615975 | orchestrator | 2026-04-09 04:18:04.615986 | 
orchestrator | TASK [Attach test volume] ****************************************************** 2026-04-09 04:18:04.615997 | orchestrator | Thursday 09 April 2026 04:17:24 +0000 (0:00:06.835) 0:03:37.482 ******** 2026-04-09 04:18:04.616007 | orchestrator | changed: [localhost] 2026-04-09 04:18:04.616036 | orchestrator | 2026-04-09 04:18:04.616048 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-09 04:18:04.616059 | orchestrator | Thursday 09 April 2026 04:17:38 +0000 (0:00:13.873) 0:03:51.355 ******** 2026-04-09 04:18:04.616071 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 04:18:04.616083 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 04:18:04.616094 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 04:18:04.616105 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 04:18:04.616117 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 04:18:04.616128 | orchestrator | 2026-04-09 04:18:04.616139 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-09 04:18:04.616151 | orchestrator | Thursday 09 April 2026 04:18:04 +0000 (0:00:25.869) 0:04:17.224 ******** 2026-04-09 04:18:04.616162 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-09 04:18:04.616174 | orchestrator |  "msg": "test: 192.168.112.181" 2026-04-09 04:18:04.616186 | orchestrator | } 2026-04-09 04:18:04.616197 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-09 04:18:04.616209 | orchestrator |  "msg": "test-1: 192.168.112.116" 2026-04-09 04:18:04.616221 | orchestrator | } 2026-04-09 04:18:04.616232 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-09 04:18:04.616244 | orchestrator |  "msg": "test-2: 192.168.112.137" 2026-04-09 04:18:04.616255 | 
orchestrator | } 2026-04-09 04:18:04.616266 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-09 04:18:04.616277 | orchestrator |  "msg": "test-3: 192.168.112.125" 2026-04-09 04:18:04.616289 | orchestrator | } 2026-04-09 04:18:04.616300 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-09 04:18:04.616312 | orchestrator |  "msg": "test-4: 192.168.112.154" 2026-04-09 04:18:04.616323 | orchestrator | } 2026-04-09 04:18:04.616334 | orchestrator | 2026-04-09 04:18:04.616346 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:18:04.616358 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 04:18:04.616371 | orchestrator | 2026-04-09 04:18:04.616383 | orchestrator | 2026-04-09 04:18:04.616394 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:18:04.616405 | orchestrator | Thursday 09 April 2026 04:18:04 +0000 (0:00:00.116) 0:04:17.340 ******** 2026-04-09 04:18:04.616416 | orchestrator | =============================================================================== 2026-04-09 04:18:04.616425 | orchestrator | Wait for instance creation to complete --------------------------------- 47.12s 2026-04-09 04:18:04.616435 | orchestrator | Create test routers ---------------------------------------------------- 32.53s 2026-04-09 04:18:04.616447 | orchestrator | Create floating ip addresses ------------------------------------------- 25.87s 2026-04-09 04:18:04.616457 | orchestrator | Create test subnets ---------------------------------------------------- 16.03s 2026-04-09 04:18:04.616468 | orchestrator | Create test networks --------------------------------------------------- 13.92s 2026-04-09 04:18:04.616479 | orchestrator | Attach test volume ----------------------------------------------------- 13.87s 2026-04-09 04:18:04.616489 | orchestrator | Add member roles to user test 
------------------------------------------ 11.97s 2026-04-09 04:18:04.616501 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.54s 2026-04-09 04:18:04.616512 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.57s 2026-04-09 04:18:04.616523 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.09s 2026-04-09 04:18:04.616550 | orchestrator | Create test volume ------------------------------------------------------ 6.84s 2026-04-09 04:18:04.616561 | orchestrator | Create test instances --------------------------------------------------- 5.24s 2026-04-09 04:18:04.616572 | orchestrator | Create ssh security group ----------------------------------------------- 5.17s 2026-04-09 04:18:04.616582 | orchestrator | Add tag to instances ---------------------------------------------------- 4.92s 2026-04-09 04:18:04.616592 | orchestrator | Add metadata to instances ----------------------------------------------- 4.68s 2026-04-09 04:18:04.616603 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.43s 2026-04-09 04:18:04.616613 | orchestrator | Create test server group ------------------------------------------------ 4.39s 2026-04-09 04:18:04.616624 | orchestrator | Create test user -------------------------------------------------------- 4.38s 2026-04-09 04:18:04.616636 | orchestrator | Create test-admin user -------------------------------------------------- 4.23s 2026-04-09 04:18:04.616654 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.20s 2026-04-09 04:18:05.089431 | orchestrator | + server_list 2026-04-09 04:18:05.089510 | orchestrator | + openstack --os-cloud test server list 2026-04-09 04:18:09.150766 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-09 
04:18:09.150884 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-04-09 04:18:09.150899 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-09 04:18:09.150911 | orchestrator | | ba1519ae-7195-464e-8d49-84a6a2a905b7 | test-4 | ACTIVE | test-3=192.168.112.154, 192.168.202.116 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 04:18:09.150923 | orchestrator | | a4932245-1532-4210-8014-5f31e894606e | test-3 | ACTIVE | test-2=192.168.112.125, 192.168.201.151 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 04:18:09.150934 | orchestrator | | 0c6b4412-9812-4152-9c50-5ceb4b4c1853 | test-1 | ACTIVE | test-1=192.168.112.116, 192.168.200.93 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 04:18:09.150945 | orchestrator | | 9e5b4591-beda-4eff-9b2d-a5e9a114870a | test-2 | ACTIVE | test-2=192.168.112.137, 192.168.201.214 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 04:18:09.150956 | orchestrator | | 61747fe2-2e4c-4212-9d39-9cc958a5fad0 | test | ACTIVE | test-1=192.168.112.181, 192.168.200.66 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 04:18:09.150967 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-09 04:18:09.571477 | orchestrator | + openstack --os-cloud test server show test 2026-04-09 04:18:13.017741 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:13.017874 | orchestrator | | Field | Value | 2026-04-09 04:18:13.017894 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:13.017925 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 04:18:13.017938 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 04:18:13.017949 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 04:18:13.017961 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-09 04:18:13.017972 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 04:18:13.017983 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 04:18:13.018170 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 04:18:13.018205 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 04:18:13.018270 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 04:18:13.018289 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 04:18:13.018324 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 04:18:13.018350 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 04:18:13.018363 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 04:18:13.018381 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 04:18:13.018395 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 04:18:13.018408 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:31.000000 | 2026-04-09 04:18:13.018430 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 04:18:13.018442 | orchestrator | | accessIPv4 | | 2026-04-09 04:18:13.018453 | orchestrator | | accessIPv6 | | 2026-04-09 04:18:13.018472 | orchestrator 
| | addresses | test-1=192.168.112.181, 192.168.200.66 | 2026-04-09 04:18:13.018483 | orchestrator | | config_drive | | 2026-04-09 04:18:13.018494 | orchestrator | | created | 2026-04-09T04:16:04Z | 2026-04-09 04:18:13.018505 | orchestrator | | description | None | 2026-04-09 04:18:13.018521 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 04:18:13.018532 | orchestrator | | hostId | e6a4f3ec81f73649a6d605835ca17df687b1765525d632ba291ccfd0 | 2026-04-09 04:18:13.018543 | orchestrator | | host_status | None | 2026-04-09 04:18:13.018563 | orchestrator | | id | 61747fe2-2e4c-4212-9d39-9cc958a5fad0 | 2026-04-09 04:18:13.018574 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 04:18:13.018592 | orchestrator | | key_name | test | 2026-04-09 04:18:13.018603 | orchestrator | | locked | False | 2026-04-09 04:18:13.018614 | orchestrator | | locked_reason | None | 2026-04-09 04:18:13.018625 | orchestrator | | name | test | 2026-04-09 04:18:13.018636 | orchestrator | | pinned_availability_zone | None | 2026-04-09 04:18:13.018652 | orchestrator | | progress | 0 | 2026-04-09 04:18:13.018663 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 | 2026-04-09 04:18:13.018674 | orchestrator | | properties | hostname='test' | 2026-04-09 04:18:13.018692 | orchestrator | | security_groups | name='ssh' | 2026-04-09 04:18:13.018710 | orchestrator | | | name='icmp' | 2026-04-09 04:18:13.018721 | orchestrator | | server_groups | None | 2026-04-09 04:18:13.018732 | orchestrator | | status | ACTIVE | 2026-04-09 04:18:13.018743 | orchestrator | | tags | test | 2026-04-09 04:18:13.018754 | orchestrator | | 
trusted_image_certificates | None | 2026-04-09 04:18:13.018765 | orchestrator | | updated | 2026-04-09T04:16:54Z | 2026-04-09 04:18:13.018786 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea | 2026-04-09 04:18:13.018798 | orchestrator | | volumes_attached | delete_on_termination='True', id='d03324af-d95d-45d2-b3ed-3b75cdb94dcf' | 2026-04-09 04:18:13.018809 | orchestrator | | | delete_on_termination='False', id='e352b469-8578-444e-a62d-fced9b687e85' | 2026-04-09 04:18:13.025078 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:13.351481 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-09 04:18:16.496244 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:16.496337 | orchestrator | | Field | Value | 2026-04-09 04:18:16.496350 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2026-04-09 04:18:16.496360 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 04:18:16.496369 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 04:18:16.496394 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 04:18:16.496404 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-09 04:18:16.496412 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 04:18:16.496420 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 04:18:16.496463 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 04:18:16.496473 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 04:18:16.496481 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 04:18:16.496489 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 04:18:16.496497 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 04:18:16.496505 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 04:18:16.496514 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 04:18:16.496522 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 04:18:16.496530 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 04:18:16.496544 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:31.000000 | 2026-04-09 04:18:16.496558 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 04:18:16.496566 | orchestrator | | accessIPv4 | | 2026-04-09 04:18:16.496574 | orchestrator | | accessIPv6 | | 2026-04-09 04:18:16.496583 | orchestrator | | addresses | test-1=192.168.112.116, 192.168.200.93 | 2026-04-09 04:18:16.496591 | orchestrator | | config_drive | | 2026-04-09 04:18:16.496606 | orchestrator | | created | 2026-04-09T04:16:06Z | 2026-04-09 04:18:16.496618 | orchestrator | | description | None | 2026-04-09 04:18:16.496627 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 04:18:16.496641 | orchestrator | | hostId | e6a4f3ec81f73649a6d605835ca17df687b1765525d632ba291ccfd0 | 2026-04-09 04:18:16.496649 | orchestrator | | host_status | None | 2026-04-09 04:18:16.496663 | orchestrator | | id | 0c6b4412-9812-4152-9c50-5ceb4b4c1853 | 2026-04-09 04:18:16.496672 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 04:18:16.496680 | orchestrator | | key_name | test | 2026-04-09 04:18:16.496688 | orchestrator | | locked | False | 2026-04-09 04:18:16.496696 | orchestrator | | locked_reason | None | 2026-04-09 04:18:16.496704 | orchestrator | | name | test-1 | 2026-04-09 04:18:16.496717 | orchestrator | | pinned_availability_zone | None | 2026-04-09 04:18:16.496730 | orchestrator | | progress | 0 | 2026-04-09 04:18:16.496739 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 | 2026-04-09 04:18:16.496747 | orchestrator | | properties | hostname='test-1' | 2026-04-09 04:18:16.496761 | orchestrator | | security_groups | name='ssh' | 2026-04-09 04:18:16.496769 | orchestrator | | | name='icmp' | 2026-04-09 04:18:16.496777 | orchestrator | | server_groups | None | 2026-04-09 04:18:16.496785 | orchestrator | | status | ACTIVE | 2026-04-09 04:18:16.496794 | orchestrator | | tags | test | 2026-04-09 04:18:16.496802 | orchestrator | | trusted_image_certificates | None | 2026-04-09 04:18:16.496814 | orchestrator | | updated | 2026-04-09T04:16:54Z | 2026-04-09 04:18:16.496827 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea | 2026-04-09 04:18:16.496836 | orchestrator | | volumes_attached | delete_on_termination='True', id='3effe481-0984-41be-85c1-c77244e7c318' | 2026-04-09 04:18:16.500540 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:16.761396 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-09 04:18:19.878768 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:19.878883 | orchestrator | | Field | Value | 2026-04-09 04:18:19.878901 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:19.878913 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 04:18:19.878925 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 04:18:19.878937 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 04:18:19.878987 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-09 04:18:19.879083 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 04:18:19.879107 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 
04:18:19.879139 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 04:18:19.879151 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 04:18:19.879162 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 04:18:19.879174 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 04:18:19.879185 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 04:18:19.879196 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 04:18:19.879218 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 04:18:19.879278 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 04:18:19.879305 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 04:18:19.879324 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:32.000000 | 2026-04-09 04:18:19.879358 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 04:18:19.879378 | orchestrator | | accessIPv4 | | 2026-04-09 04:18:19.879393 | orchestrator | | accessIPv6 | | 2026-04-09 04:18:19.879406 | orchestrator | | addresses | test-2=192.168.112.137, 192.168.201.214 | 2026-04-09 04:18:19.879419 | orchestrator | | config_drive | | 2026-04-09 04:18:19.879440 | orchestrator | | created | 2026-04-09T04:16:06Z | 2026-04-09 04:18:19.879453 | orchestrator | | description | None | 2026-04-09 04:18:19.879471 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 04:18:19.879485 | orchestrator | | hostId | f57f87be6086d84d015b9887a3e67d1e2733d74d53d33b9070d2b1e6 | 2026-04-09 04:18:19.879498 | orchestrator | | host_status | None | 2026-04-09 04:18:19.879519 | orchestrator | | id | 
9e5b4591-beda-4eff-9b2d-a5e9a114870a | 2026-04-09 04:18:19.879533 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 04:18:19.879546 | orchestrator | | key_name | test | 2026-04-09 04:18:19.879559 | orchestrator | | locked | False | 2026-04-09 04:18:19.879578 | orchestrator | | locked_reason | None | 2026-04-09 04:18:19.879591 | orchestrator | | name | test-2 | 2026-04-09 04:18:19.879604 | orchestrator | | pinned_availability_zone | None | 2026-04-09 04:18:19.879617 | orchestrator | | progress | 0 | 2026-04-09 04:18:19.879631 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 | 2026-04-09 04:18:19.879645 | orchestrator | | properties | hostname='test-2' | 2026-04-09 04:18:19.879664 | orchestrator | | security_groups | name='ssh' | 2026-04-09 04:18:19.879675 | orchestrator | | | name='icmp' | 2026-04-09 04:18:19.879686 | orchestrator | | server_groups | None | 2026-04-09 04:18:19.880213 | orchestrator | | status | ACTIVE | 2026-04-09 04:18:19.880266 | orchestrator | | tags | test | 2026-04-09 04:18:19.880278 | orchestrator | | trusted_image_certificates | None | 2026-04-09 04:18:19.880289 | orchestrator | | updated | 2026-04-09T04:16:55Z | 2026-04-09 04:18:19.880300 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea | 2026-04-09 04:18:19.880311 | orchestrator | | volumes_attached | delete_on_termination='True', id='ee4965c4-28f4-460c-80fe-7433de8c29bd' | 2026-04-09 04:18:19.885093 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:20.327275 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-09 04:18:23.397874 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:23.397962 | orchestrator | | Field | Value | 2026-04-09 04:18:23.397973 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:23.398063 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 04:18:23.398074 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 04:18:23.398081 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 04:18:23.398088 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-09 04:18:23.398095 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 04:18:23.398102 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 04:18:23.398123 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 04:18:23.398130 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 04:18:23.398137 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 04:18:23.398150 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 04:18:23.398161 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 04:18:23.398168 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 04:18:23.398175 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-04-09 04:18:23.398182 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 04:18:23.398189 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 04:18:23.398196 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:32.000000 | 2026-04-09 04:18:23.398208 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 04:18:23.398216 | orchestrator | | accessIPv4 | | 2026-04-09 04:18:23.398238 | orchestrator | | accessIPv6 | | 2026-04-09 04:18:23.398245 | orchestrator | | addresses | test-2=192.168.112.125, 192.168.201.151 | 2026-04-09 04:18:23.398255 | orchestrator | | config_drive | | 2026-04-09 04:18:23.398263 | orchestrator | | created | 2026-04-09T04:16:08Z | 2026-04-09 04:18:23.398270 | orchestrator | | description | None | 2026-04-09 04:18:23.398277 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 04:18:23.398284 | orchestrator | | hostId | f57f87be6086d84d015b9887a3e67d1e2733d74d53d33b9070d2b1e6 | 2026-04-09 04:18:23.398291 | orchestrator | | host_status | None | 2026-04-09 04:18:23.398303 | orchestrator | | id | a4932245-1532-4210-8014-5f31e894606e | 2026-04-09 04:18:23.398314 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 04:18:23.398322 | orchestrator | | key_name | test | 2026-04-09 04:18:23.398329 | orchestrator | | locked | False | 2026-04-09 04:18:23.398339 | orchestrator | | locked_reason | None | 2026-04-09 04:18:23.398347 | orchestrator | | name | test-3 | 2026-04-09 04:18:23.398354 | orchestrator | | pinned_availability_zone | None | 2026-04-09 04:18:23.398360 | orchestrator | | progress | 0 | 2026-04-09 
04:18:23.398367 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 | 2026-04-09 04:18:23.398374 | orchestrator | | properties | hostname='test-3' | 2026-04-09 04:18:23.398386 | orchestrator | | security_groups | name='ssh' | 2026-04-09 04:18:23.398397 | orchestrator | | | name='icmp' | 2026-04-09 04:18:23.398405 | orchestrator | | server_groups | None | 2026-04-09 04:18:23.398412 | orchestrator | | status | ACTIVE | 2026-04-09 04:18:23.398422 | orchestrator | | tags | test | 2026-04-09 04:18:23.398429 | orchestrator | | trusted_image_certificates | None | 2026-04-09 04:18:23.398436 | orchestrator | | updated | 2026-04-09T04:16:56Z | 2026-04-09 04:18:23.398445 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea | 2026-04-09 04:18:23.398453 | orchestrator | | volumes_attached | delete_on_termination='True', id='837050b0-ef15-43e5-9d37-e3f9710f264f' | 2026-04-09 04:18:23.402352 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:23.685362 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-09 04:18:27.108583 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:27.108688 | orchestrator | | Field | Value | 2026-04-09 04:18:27.108706 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:27.108718 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 04:18:27.108748 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 04:18:27.108760 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 04:18:27.108771 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-09 04:18:27.108782 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 04:18:27.108793 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 04:18:27.108843 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 04:18:27.108857 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 04:18:27.108869 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 04:18:27.108880 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 04:18:27.108891 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 04:18:27.108903 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 04:18:27.108914 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 04:18:27.108926 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 04:18:27.108937 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 04:18:27.108968 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:31.000000 | 2026-04-09 04:18:27.108987 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 04:18:27.109110 | orchestrator | | accessIPv4 | | 2026-04-09 04:18:27.109133 | orchestrator | | accessIPv6 | | 2026-04-09 04:18:27.109145 | orchestrator | | 
addresses | test-3=192.168.112.154, 192.168.202.116 | 2026-04-09 04:18:27.109156 | orchestrator | | config_drive | | 2026-04-09 04:18:27.109172 | orchestrator | | created | 2026-04-09T04:16:09Z | 2026-04-09 04:18:27.109183 | orchestrator | | description | None | 2026-04-09 04:18:27.109194 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 04:18:27.109205 | orchestrator | | hostId | e6a4f3ec81f73649a6d605835ca17df687b1765525d632ba291ccfd0 | 2026-04-09 04:18:27.109225 | orchestrator | | host_status | None | 2026-04-09 04:18:27.109247 | orchestrator | | id | ba1519ae-7195-464e-8d49-84a6a2a905b7 | 2026-04-09 04:18:27.109259 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 04:18:27.109270 | orchestrator | | key_name | test | 2026-04-09 04:18:27.109281 | orchestrator | | locked | False | 2026-04-09 04:18:27.109297 | orchestrator | | locked_reason | None | 2026-04-09 04:18:27.109309 | orchestrator | | name | test-4 | 2026-04-09 04:18:27.109320 | orchestrator | | pinned_availability_zone | None | 2026-04-09 04:18:27.109331 | orchestrator | | progress | 0 | 2026-04-09 04:18:27.109349 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 | 2026-04-09 04:18:27.109360 | orchestrator | | properties | hostname='test-4' | 2026-04-09 04:18:27.109379 | orchestrator | | security_groups | name='ssh' | 2026-04-09 04:18:27.109390 | orchestrator | | | name='icmp' | 2026-04-09 04:18:27.109401 | orchestrator | | server_groups | None | 2026-04-09 04:18:27.109413 | orchestrator | | status | ACTIVE | 2026-04-09 04:18:27.109429 | orchestrator | | tags | test | 2026-04-09 04:18:27.109440 | orchestrator | | 
trusted_image_certificates | None | 2026-04-09 04:18:27.109451 | orchestrator | | updated | 2026-04-09T04:16:56Z | 2026-04-09 04:18:27.109469 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea | 2026-04-09 04:18:27.109480 | orchestrator | | volumes_attached | delete_on_termination='True', id='62db43ed-06ba-4861-9d62-b2ace55c75c3' | 2026-04-09 04:18:27.113163 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 04:18:27.421052 | orchestrator | + server_ping 2026-04-09 04:18:27.422127 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 04:18:27.422237 | orchestrator | ++ tr -d '\r' 2026-04-09 04:18:30.373272 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 04:18:30.373349 | orchestrator | + ping -c3 192.168.112.154 2026-04-09 04:18:30.387151 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 
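The `server_ping` helper whose trace begins just above lists the ACTIVE floating IPs, strips carriage returns from the CLI output, and pings each address three times. A minimal stand-alone sketch of that control flow (reconstructed from the xtrace; the `openstack` call is stubbed here with a fixed list so the sketch runs anywhere, and `list_active_fips` is a hypothetical name):

```shell
# Sketch of the server_ping helper traced in this log (an assumption
# reconstructed from the xtrace output, not the actual script source).
list_active_fips() {
    # Stands in for:
    #   openstack --os-cloud test floating ip list --status ACTIVE \
    #     -f value -c "Floating IP Address"
    # CLI output can carry CRLF line endings, hence tr -d '\r' below.
    printf '192.168.112.154\r\n192.168.112.116\r\n'
}

server_ping() {
    for address in $(list_active_fips | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

Stripping the carriage returns matters: with a trailing `\r` left in place, `ping` would be handed `192.168.112.154<CR>` and fail to parse it as an address.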
2026-04-09 04:18:30.387227 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=7.19 ms 2026-04-09 04:18:31.384314 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=2.66 ms 2026-04-09 04:18:32.386801 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=2.35 ms 2026-04-09 04:18:32.387849 | orchestrator | 2026-04-09 04:18:32.387921 | orchestrator | --- 192.168.112.154 ping statistics --- 2026-04-09 04:18:32.387936 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-09 04:18:32.387947 | orchestrator | rtt min/avg/max/mdev = 2.353/4.065/7.189/2.211 ms 2026-04-09 04:18:32.387958 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 04:18:32.387969 | orchestrator | + ping -c3 192.168.112.116 2026-04-09 04:18:32.401638 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2026-04-09 04:18:32.401748 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=9.44 ms 2026-04-09 04:18:33.396444 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.48 ms 2026-04-09 04:18:34.398400 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.08 ms 2026-04-09 04:18:34.398479 | orchestrator | 2026-04-09 04:18:34.398490 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-04-09 04:18:34.398500 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-09 04:18:34.398507 | orchestrator | rtt min/avg/max/mdev = 2.076/4.662/9.436/3.379 ms 2026-04-09 04:18:34.398516 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 04:18:34.398524 | orchestrator | + ping -c3 192.168.112.137 2026-04-09 04:18:34.412158 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data. 
2026-04-09 04:18:34.412229 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=8.70 ms 2026-04-09 04:18:35.407552 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.11 ms 2026-04-09 04:18:36.409399 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.93 ms 2026-04-09 04:18:36.409477 | orchestrator | 2026-04-09 04:18:36.409492 | orchestrator | --- 192.168.112.137 ping statistics --- 2026-04-09 04:18:36.409529 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 04:18:36.409540 | orchestrator | rtt min/avg/max/mdev = 1.934/4.248/8.698/3.147 ms 2026-04-09 04:18:36.409551 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 04:18:36.409561 | orchestrator | + ping -c3 192.168.112.125 2026-04-09 04:18:36.423201 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 2026-04-09 04:18:36.423285 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=8.82 ms 2026-04-09 04:18:37.417723 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.26 ms 2026-04-09 04:18:38.419341 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=1.89 ms 2026-04-09 04:18:38.419426 | orchestrator | 2026-04-09 04:18:38.419438 | orchestrator | --- 192.168.112.125 ping statistics --- 2026-04-09 04:18:38.419449 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 04:18:38.419458 | orchestrator | rtt min/avg/max/mdev = 1.886/4.320/8.815/3.181 ms 2026-04-09 04:18:38.422546 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 04:18:38.422589 | orchestrator | + ping -c3 192.168.112.181 2026-04-09 04:18:38.438640 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 
2026-04-09 04:18:38.438730 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=8.10 ms 2026-04-09 04:18:39.434725 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.52 ms 2026-04-09 04:18:40.436155 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.57 ms 2026-04-09 04:18:40.436283 | orchestrator | 2026-04-09 04:18:40.436303 | orchestrator | --- 192.168.112.181 ping statistics --- 2026-04-09 04:18:40.436318 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 04:18:40.436330 | orchestrator | rtt min/avg/max/mdev = 1.573/4.063/8.098/2.879 ms 2026-04-09 04:18:40.436342 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-09 04:18:40.786476 | orchestrator | ok: Runtime: 0:09:12.771060 2026-04-09 04:18:40.872462 | 2026-04-09 04:18:40.872701 | TASK [Run tempest] 2026-04-09 04:18:41.412092 | orchestrator | skipping: Conditional result was False 2026-04-09 04:18:41.422165 | 2026-04-09 04:18:41.422297 | TASK [Check prometheus alert status] 2026-04-09 04:18:41.956536 | orchestrator | skipping: Conditional result was False 2026-04-09 04:18:41.963989 | 2026-04-09 04:18:41.964118 | PLAY [Upgrade testbed] 2026-04-09 04:18:41.972339 | 2026-04-09 04:18:41.972445 | TASK [Print next ceph version] 2026-04-09 04:18:42.050990 | orchestrator | ok 2026-04-09 04:18:42.060462 | 2026-04-09 04:18:42.060587 | TASK [Print next openstack version] 2026-04-09 04:18:42.125524 | orchestrator | ok 2026-04-09 04:18:42.135755 | 2026-04-09 04:18:42.135875 | TASK [Print next manager version] 2026-04-09 04:18:42.199259 | orchestrator | ok 2026-04-09 04:18:42.207732 | 2026-04-09 04:18:42.207869 | TASK [Set cloud fact (Zuul deployment)] 2026-04-09 04:18:42.266276 | orchestrator | ok 2026-04-09 04:18:42.277618 | 2026-04-09 04:18:42.277742 | TASK [Set cloud fact (local deployment)] 2026-04-09 04:18:42.313813 | orchestrator | skipping: Conditional result was False 2026-04-09 04:18:42.331515 | 2026-04-09 
04:18:42.331667 | TASK [Fetch manager address] 2026-04-09 04:18:42.644049 | orchestrator | ok 2026-04-09 04:18:42.652359 | 2026-04-09 04:18:42.652484 | TASK [Set manager_host address] 2026-04-09 04:18:42.733934 | orchestrator | ok 2026-04-09 04:18:42.746087 | 2026-04-09 04:18:42.746225 | TASK [Run upgrade] 2026-04-09 04:18:43.469065 | orchestrator | + set -e 2026-04-09 04:18:43.469246 | orchestrator | + export MANAGER_VERSION=10.0.0 2026-04-09 04:18:43.469272 | orchestrator | + MANAGER_VERSION=10.0.0 2026-04-09 04:18:43.469286 | orchestrator | + CEPH_VERSION=reef 2026-04-09 04:18:43.469299 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-04-09 04:18:43.469310 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-04-09 04:18:43.469323 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0 reef 2024.2 kolla/release' 2026-04-09 04:18:43.474669 | orchestrator | + set -e 2026-04-09 04:18:43.474738 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 04:18:43.475033 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 04:18:43.475067 | orchestrator | ++ INTERACTIVE=false 2026-04-09 04:18:43.475082 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 04:18:43.475101 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 04:18:43.476432 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-04-09 04:18:43.518449 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-04-09 04:18:43.519284 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-04-09 04:18:43.557825 | orchestrator | 2026-04-09 04:18:43.557910 | orchestrator | # UPGRADE MANAGER 2026-04-09 04:18:43.557922 | orchestrator | 2026-04-09 04:18:43.557930 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-04-09 04:18:43.557937 | orchestrator | + echo 2026-04-09 04:18:43.557945 | orchestrator | + echo '# UPGRADE MANAGER' 2026-04-09 
04:18:43.557952 | orchestrator | + echo 2026-04-09 04:18:43.557958 | orchestrator | + export MANAGER_VERSION=10.0.0 2026-04-09 04:18:43.557965 | orchestrator | + MANAGER_VERSION=10.0.0 2026-04-09 04:18:43.557992 | orchestrator | + CEPH_VERSION=reef 2026-04-09 04:18:43.558000 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-04-09 04:18:43.558006 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-04-09 04:18:43.558013 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0 2026-04-09 04:18:43.564773 | orchestrator | + set -e 2026-04-09 04:18:43.564855 | orchestrator | + VERSION=10.0.0 2026-04-09 04:18:43.564867 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml 2026-04-09 04:18:43.572684 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-09 04:18:43.572740 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-09 04:18:43.576894 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-09 04:18:43.581752 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-04-09 04:18:43.588276 | orchestrator | /opt/configuration ~ 2026-04-09 04:18:43.588322 | orchestrator | + set -e 2026-04-09 04:18:43.588330 | orchestrator | + pushd /opt/configuration 2026-04-09 04:18:43.588336 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-09 04:18:43.588344 | orchestrator | + source /opt/venv/bin/activate 2026-04-09 04:18:43.589427 | orchestrator | ++ deactivate nondestructive 2026-04-09 04:18:43.589510 | orchestrator | ++ '[' -n '' ']' 2026-04-09 04:18:43.589522 | orchestrator | ++ '[' -n '' ']' 2026-04-09 04:18:43.589555 | orchestrator | ++ hash -r 2026-04-09 04:18:43.589562 | orchestrator | ++ '[' -n '' ']' 2026-04-09 04:18:43.589568 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-09 04:18:43.589573 | orchestrator | ++ unset 
VIRTUAL_ENV_PROMPT 2026-04-09 04:18:43.589579 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-09 04:18:43.589596 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-09 04:18:43.589602 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-09 04:18:43.589608 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-09 04:18:43.589614 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-09 04:18:43.589621 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 04:18:43.589631 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 04:18:43.589637 | orchestrator | ++ export PATH 2026-04-09 04:18:43.589643 | orchestrator | ++ '[' -n '' ']' 2026-04-09 04:18:43.589649 | orchestrator | ++ '[' -z '' ']' 2026-04-09 04:18:43.589654 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-09 04:18:43.589660 | orchestrator | ++ PS1='(venv) ' 2026-04-09 04:18:43.589666 | orchestrator | ++ export PS1 2026-04-09 04:18:43.589672 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-09 04:18:43.589677 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-09 04:18:43.589683 | orchestrator | ++ hash -r 2026-04-09 04:18:43.589694 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-04-09 04:18:44.877639 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-04-09 04:18:44.878559 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1) 2026-04-09 04:18:44.879994 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-04-09 04:18:44.881312 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-04-09 04:18:44.882517 | 
orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0) 2026-04-09 04:18:44.893076 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2) 2026-04-09 04:18:44.894380 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-04-09 04:18:44.895665 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-04-09 04:18:44.896760 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-04-09 04:18:44.939759 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7) 2026-04-09 04:18:44.941200 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-04-09 04:18:44.943014 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-04-09 04:18:44.944374 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-04-09 04:18:44.948400 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-04-09 04:18:45.260866 | orchestrator | ++ which gilt 2026-04-09 04:18:45.264218 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-04-09 04:18:45.264289 | orchestrator | + /opt/venv/bin/gilt overlay 2026-04-09 04:18:45.566687 | orchestrator | osism.cfg-generics: 2026-04-09 04:18:45.681621 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 
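The `set-manager-version.sh` trace above shows the pattern: pin `manager_version` in place with `sed`, then drop the explicit `ceph_version`/`openstack_version` lines whenever the target is a pinned (non-`latest`) release so the release defaults apply. A minimal sketch of that logic, using a temp file as a stand-in for `/opt/configuration/environments/manager/configuration.yml` (the real script takes the version as its first argument; it is inlined here for illustration):

```shell
# Sketch of the set-manager-version.sh logic traced in the log above.
# The temp file stands in for environments/manager/configuration.yml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
manager_version: latest
ceph_version: reef
openstack_version: 2024.2
EOF

VERSION=10.0.0

# Pin the manager version in place.
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$cfg"

# For a pinned release (anything but "latest"), remove the explicit
# ceph/openstack pins so the release's own defaults take effect.
if [ "$VERSION" != latest ]; then
  sed -i '/ceph_version:/d' "$cfg"
  sed -i '/openstack_version:/d' "$cfg"
fi
```

After this runs, only `manager_version: 10.0.0` remains in the file, matching the state the subsequent `sync-configuration-repository.sh` run starts from.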
2026-04-09 04:18:45.682952 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-04-09 04:18:45.684093 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-04-09 04:18:45.684522 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-04-09 04:18:46.684306 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-04-09 04:18:46.698543 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-04-09 04:18:47.251409 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-04-09 04:18:47.320262 | orchestrator | ~ 2026-04-09 04:18:47.320331 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-09 04:18:47.320338 | orchestrator | + deactivate 2026-04-09 04:18:47.320344 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-09 04:18:47.320350 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 04:18:47.320354 | orchestrator | + export PATH 2026-04-09 04:18:47.320358 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-09 04:18:47.320362 | orchestrator | + '[' -n '' ']' 2026-04-09 04:18:47.320366 | orchestrator | + hash -r 2026-04-09 04:18:47.320370 | orchestrator | + '[' -n '' ']' 2026-04-09 04:18:47.320374 | orchestrator | + unset VIRTUAL_ENV 2026-04-09 04:18:47.320378 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-09 04:18:47.320382 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-09 04:18:47.320386 | orchestrator | + unset -f deactivate 2026-04-09 04:18:47.320390 | orchestrator | + popd 2026-04-09 04:18:47.322995 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]] 2026-04-09 04:18:47.323125 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-04-09 04:18:47.331587 | orchestrator | + set -e 2026-04-09 04:18:47.331642 | orchestrator | + NAMESPACE=kolla/release 2026-04-09 04:18:47.331652 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-09 04:18:47.339311 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-04-09 04:18:47.347295 | orchestrator | /opt/configuration ~ 2026-04-09 04:18:47.347359 | orchestrator | + set -e 2026-04-09 04:18:47.347374 | orchestrator | + pushd /opt/configuration 2026-04-09 04:18:47.347387 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-09 04:18:47.347400 | orchestrator | + source /opt/venv/bin/activate 2026-04-09 04:18:47.347470 | orchestrator | ++ deactivate nondestructive 2026-04-09 04:18:47.347485 | orchestrator | ++ '[' -n '' ']' 2026-04-09 04:18:47.347524 | orchestrator | ++ '[' -n '' ']' 2026-04-09 04:18:47.347533 | orchestrator | ++ hash -r 2026-04-09 04:18:47.347540 | orchestrator | ++ '[' -n '' ']' 2026-04-09 04:18:47.347547 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-09 04:18:47.347553 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-09 04:18:47.347672 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-04-09 04:18:47.347684 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-09 04:18:47.347696 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-09 04:18:47.347703 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-09 04:18:47.347713 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-09 04:18:47.347791 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 04:18:47.347802 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 04:18:47.347814 | orchestrator | ++ export PATH 2026-04-09 04:18:47.347821 | orchestrator | ++ '[' -n '' ']' 2026-04-09 04:18:47.347913 | orchestrator | ++ '[' -z '' ']' 2026-04-09 04:18:47.347923 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-09 04:18:47.347930 | orchestrator | ++ PS1='(venv) ' 2026-04-09 04:18:47.347936 | orchestrator | ++ export PS1 2026-04-09 04:18:47.347943 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-09 04:18:47.347950 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-09 04:18:47.347957 | orchestrator | ++ hash -r 2026-04-09 04:18:47.348153 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-04-09 04:18:47.911864 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-04-09 04:18:47.912726 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1) 2026-04-09 04:18:47.914094 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-04-09 04:18:47.915333 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-04-09 04:18:47.916431 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-04-09 04:18:47.927609 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2) 2026-04-09 04:18:47.928765 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-04-09 04:18:47.929757 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-04-09 04:18:47.931226 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-04-09 04:18:47.969061 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7) 2026-04-09 04:18:47.970724 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-04-09 04:18:47.972143 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-04-09 04:18:47.973828 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-04-09 04:18:47.977540 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-04-09 04:18:48.198698 | orchestrator | ++ which gilt 2026-04-09 04:18:48.199848 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-04-09 04:18:48.199882 | orchestrator | + /opt/venv/bin/gilt overlay 2026-04-09 04:18:48.442401 | orchestrator | osism.cfg-generics: 2026-04-09 04:18:48.545647 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-04-09 04:18:48.545731 | orchestrator | - copied 
(v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-04-09 04:18:48.545883 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-04-09 04:18:48.546178 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-04-09 04:18:49.088948 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-04-09 04:18:49.101919 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-04-09 04:18:49.482154 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-04-09 04:18:49.535729 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-09 04:18:49.535874 | orchestrator | + deactivate 2026-04-09 04:18:49.535891 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-09 04:18:49.535906 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 04:18:49.535918 | orchestrator | + export PATH 2026-04-09 04:18:49.535930 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-09 04:18:49.535942 | orchestrator | + '[' -n '' ']' 2026-04-09 04:18:49.535954 | orchestrator | + hash -r 2026-04-09 04:18:49.535992 | orchestrator | + '[' -n '' ']' 2026-04-09 04:18:49.536006 | orchestrator | + unset VIRTUAL_ENV 2026-04-09 04:18:49.536019 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-09 04:18:49.536031 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-09 04:18:49.536043 | orchestrator | + unset -f deactivate 2026-04-09 04:18:49.536070 | orchestrator | + popd 2026-04-09 04:18:49.536083 | orchestrator | ~ 2026-04-09 04:18:49.538753 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-04-09 04:18:49.608165 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 04:18:49.608329 | orchestrator | ++ semver 10.0.0 10.0.0-0 2026-04-09 04:18:49.695105 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-09 04:18:49.695264 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-04-09 04:18:49.708317 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-04-09 04:18:49.717491 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-04-09 04:18:49.789141 | orchestrator | ++ '[' -1 -le 0 ']' 2026-04-09 04:18:49.790061 | orchestrator | +++ semver 10.0.0 10.0.0-0 2026-04-09 04:18:49.879421 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-04-09 04:18:49.879509 | orchestrator | ++ echo true 2026-04-09 04:18:49.879656 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-04-09 04:18:49.882928 | orchestrator | +++ semver 2024.2 2024.2 2026-04-09 04:18:49.977996 | orchestrator | ++ '[' 0 -le 0 ']' 2026-04-09 04:18:49.978588 | orchestrator | +++ semver 2024.2 2025.1 2026-04-09 04:18:50.049165 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-04-09 04:18:50.049274 | orchestrator | ++ echo false 2026-04-09 04:18:50.049303 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-04-09 04:18:50.049635 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-09 04:18:50.049656 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-04-09 04:18:50.049816 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-04-09 04:18:50.049927 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-04-09 
04:18:50.056492 | orchestrator | + echo 'export RABBITMQ3TO4=true' 2026-04-09 04:18:50.056586 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-04-09 04:18:50.075736 | orchestrator | export RABBITMQ3TO4=true 2026-04-09 04:18:50.079402 | orchestrator | + osism update manager 2026-04-09 04:18:56.010235 | orchestrator | Collecting uv 2026-04-09 04:18:56.119422 | orchestrator | Downloading uv-0.11.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-04-09 04:18:56.141040 | orchestrator | Downloading uv-0.11.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.8 MB) 2026-04-09 04:18:57.170914 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.8/24.8 MB 25.7 MB/s eta 0:00:00 2026-04-09 04:18:57.255597 | orchestrator | Installing collected packages: uv 2026-04-09 04:18:57.729268 | orchestrator | Successfully installed uv-0.11.5 2026-04-09 04:18:58.523488 | orchestrator | Resolved 11 packages in 415ms 2026-04-09 04:18:58.553880 | orchestrator | Downloading cryptography (4.3MiB) 2026-04-09 04:18:58.574315 | orchestrator | Downloading netaddr (2.2MiB) 2026-04-09 04:18:58.574400 | orchestrator | Downloading ansible (54.5MiB) 2026-04-09 04:18:58.574478 | orchestrator | Downloading ansible-core (2.1MiB) 2026-04-09 04:18:58.893410 | orchestrator | Downloaded netaddr 2026-04-09 04:18:58.984558 | orchestrator | Downloaded cryptography 2026-04-09 04:18:59.154230 | orchestrator | Downloaded ansible-core 2026-04-09 04:19:06.399514 | orchestrator | Downloaded ansible 2026-04-09 04:19:06.399733 | orchestrator | Prepared 11 packages in 7.87s 2026-04-09 04:19:06.966679 | orchestrator | Installed 11 packages in 565ms 2026-04-09 04:19:06.966769 | orchestrator | + ansible==11.11.0 2026-04-09 04:19:06.966782 | orchestrator | + ansible-core==2.18.15 2026-04-09 04:19:06.966793 | orchestrator | + cffi==2.0.0 2026-04-09 04:19:06.966805 | orchestrator | + cryptography==46.0.7 2026-04-09 04:19:06.966817 | orchestrator | + jinja2==3.1.6 
2026-04-09 04:19:06.966828 | orchestrator | + markupsafe==3.0.3 2026-04-09 04:19:06.966839 | orchestrator | + netaddr==1.3.0 2026-04-09 04:19:06.966850 | orchestrator | + packaging==26.0 2026-04-09 04:19:06.966861 | orchestrator | + pycparser==3.0 2026-04-09 04:19:06.966872 | orchestrator | + pyyaml==6.0.3 2026-04-09 04:19:06.966886 | orchestrator | + resolvelib==1.0.1 2026-04-09 04:19:08.148942 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-203987w5q36503/tmpzmm4anjw/ansible-collection-servicesaqvbc_nb'... 2026-04-09 04:19:09.956244 | orchestrator | Your branch is up to date with 'origin/main'. 2026-04-09 04:19:09.956367 | orchestrator | Already on 'main' 2026-04-09 04:19:10.618458 | orchestrator | Starting galaxy collection install process 2026-04-09 04:19:10.618590 | orchestrator | Process install dependency map 2026-04-09 04:19:10.618615 | orchestrator | Starting collection install process 2026-04-09 04:19:10.618637 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-04-09 04:19:10.618658 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-04-09 04:19:10.618676 | orchestrator | osism.services:999.0.0 was installed successfully 2026-04-09 04:19:11.195470 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-204012wv4x4dsl/tmp29adafzb/ansible-playbooks-managermc45g_it'... 2026-04-09 04:19:11.904245 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-04-09 04:19:11.904348 | orchestrator | Already on 'main' 2026-04-09 04:19:12.202909 | orchestrator | Starting galaxy collection install process 2026-04-09 04:19:12.203058 | orchestrator | Process install dependency map 2026-04-09 04:19:12.203076 | orchestrator | Starting collection install process 2026-04-09 04:19:12.203089 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-04-09 04:19:12.203102 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-04-09 04:19:12.203114 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-04-09 04:19:12.870214 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-04-09 04:19:12.870310 | orchestrator | -vvvv to see details 2026-04-09 04:19:13.366803 | orchestrator | 2026-04-09 04:19:13.366931 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-04-09 04:19:13.366989 | orchestrator | 2026-04-09 04:19:13.367038 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 04:19:17.591873 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:17.591962 | orchestrator | 2026-04-09 04:19:17.591978 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-09 04:19:17.666196 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 04:19:17.666305 | orchestrator | 2026-04-09 04:19:17.666320 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-04-09 04:19:19.486702 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:19.486837 | orchestrator | 2026-04-09 04:19:19.486853 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-04-09 04:19:19.553099 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:19.553192 | orchestrator | 2026-04-09 04:19:19.553208 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-09 04:19:19.632080 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-09 04:19:19.632167 | orchestrator | 2026-04-09 04:19:19.632176 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-09 04:19:24.060264 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-04-09 04:19:24.060322 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-04-09 04:19:24.060328 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-09 04:19:24.060341 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-04-09 04:19:24.060346 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-09 04:19:24.060353 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-09 04:19:24.060359 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-09 04:19:24.060366 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-04-09 04:19:24.060372 | orchestrator | 2026-04-09 04:19:24.060379 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-09 04:19:25.170262 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:25.170359 | orchestrator | 2026-04-09 04:19:25.170375 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-04-09 04:19:26.186779 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:26.186832 | orchestrator | 2026-04-09 04:19:26.186845 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-04-09 04:19:26.282462 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-09 04:19:26.282587 | orchestrator | 2026-04-09 04:19:26.282603 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-09 04:19:28.195457 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-04-09 04:19:28.195560 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-04-09 04:19:28.195576 | orchestrator | 2026-04-09 04:19:28.195589 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-09 04:19:29.256008 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:29.256105 | orchestrator | 2026-04-09 04:19:29.256120 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-09 04:19:29.336891 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:19:29.337030 | orchestrator | 2026-04-09 04:19:29.337046 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-09 04:19:29.418154 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-09 04:19:29.418244 | orchestrator | 2026-04-09 04:19:29.418258 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-09 04:19:30.393669 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:30.393762 | orchestrator | 2026-04-09 04:19:30.393775 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-04-09 04:19:30.473529 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-09 04:19:30.473606 | 
orchestrator | 2026-04-09 04:19:30.473616 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-09 04:19:32.501613 | orchestrator | ok: [testbed-manager] => (item=None) 2026-04-09 04:19:32.502579 | orchestrator | ok: [testbed-manager] => (item=None) 2026-04-09 04:19:32.502617 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:32.502633 | orchestrator | 2026-04-09 04:19:32.502646 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-09 04:19:33.472592 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:33.472699 | orchestrator | 2026-04-09 04:19:33.472714 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-09 04:19:33.536627 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:19:33.536750 | orchestrator | 2026-04-09 04:19:33.536766 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-09 04:19:33.646275 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-09 04:19:33.646360 | orchestrator | 2026-04-09 04:19:33.646371 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-09 04:19:34.341431 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:34.341502 | orchestrator | 2026-04-09 04:19:34.341508 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-09 04:19:34.925872 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:34.925989 | orchestrator | 2026-04-09 04:19:34.926073 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-09 04:19:36.859811 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-04-09 04:19:36.859902 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-04-09 04:19:36.859915 | orchestrator | 2026-04-09 04:19:36.859981 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-09 04:19:38.046404 | orchestrator | changed: [testbed-manager] 2026-04-09 04:19:38.046528 | orchestrator | 2026-04-09 04:19:38.046541 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-09 04:19:38.603735 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:38.603837 | orchestrator | 2026-04-09 04:19:38.603852 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-09 04:19:39.208654 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:39.208761 | orchestrator | 2026-04-09 04:19:39.208774 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-09 04:19:39.275460 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:19:39.275569 | orchestrator | 2026-04-09 04:19:39.275583 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-09 04:19:39.347948 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-09 04:19:39.348037 | orchestrator | 2026-04-09 04:19:39.348049 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-09 04:19:39.411662 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:39.411756 | orchestrator | 2026-04-09 04:19:39.411770 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-09 04:19:42.556526 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-04-09 04:19:42.556664 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-04-09 04:19:42.556694 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-04-09 04:19:42.556712 | orchestrator | 2026-04-09 04:19:42.556732 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-09 04:19:43.647218 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:43.647320 | orchestrator | 2026-04-09 04:19:43.647335 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-09 04:19:44.772485 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:44.772557 | orchestrator | 2026-04-09 04:19:44.772567 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-09 04:19:45.827506 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:45.827608 | orchestrator | 2026-04-09 04:19:45.827624 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-09 04:19:45.918233 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-09 04:19:45.918345 | orchestrator | 2026-04-09 04:19:45.918361 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-09 04:19:45.974251 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:45.974381 | orchestrator | 2026-04-09 04:19:45.974407 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-09 04:19:47.141444 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-04-09 04:19:47.141547 | orchestrator | 2026-04-09 04:19:47.141564 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-09 04:19:47.236296 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-09 04:19:47.236383 | orchestrator | 2026-04-09 04:19:47.236396 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-09 04:19:48.333144 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:48.333244 | orchestrator | 2026-04-09 04:19:48.333260 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-09 04:19:49.638487 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:49.638590 | orchestrator | 2026-04-09 04:19:49.638606 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-09 04:19:49.724316 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:19:49.724411 | orchestrator | 2026-04-09 04:19:49.724424 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-09 04:19:49.796327 | orchestrator | ok: [testbed-manager] 2026-04-09 04:19:49.796422 | orchestrator | 2026-04-09 04:19:49.796436 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-09 04:19:51.268173 | orchestrator | changed: [testbed-manager] 2026-04-09 04:19:51.268256 | orchestrator | 2026-04-09 04:19:51.268268 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-09 04:21:08.116973 | orchestrator | changed: [testbed-manager] 2026-04-09 04:21:08.117081 | orchestrator | 2026-04-09 04:21:08.117096 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-09 04:21:09.509543 | orchestrator | ok: [testbed-manager] 2026-04-09 04:21:09.509648 | orchestrator | 2026-04-09 04:21:09.509666 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-09 04:21:09.581983 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:21:09.582155 | orchestrator | 2026-04-09 04:21:09.582179 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-09 
04:21:10.531912 | orchestrator | ok: [testbed-manager] 2026-04-09 04:21:10.532001 | orchestrator | 2026-04-09 04:21:10.532012 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-09 04:21:10.608048 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:21:10.608181 | orchestrator | 2026-04-09 04:21:10.608209 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-09 04:21:10.608230 | orchestrator | 2026-04-09 04:21:10.608249 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-09 04:21:25.550088 | orchestrator | changed: [testbed-manager] 2026-04-09 04:21:25.550178 | orchestrator | 2026-04-09 04:21:25.550187 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-09 04:22:25.615699 | orchestrator | Pausing for 60 seconds 2026-04-09 04:22:25.615903 | orchestrator | changed: [testbed-manager] 2026-04-09 04:22:25.615934 | orchestrator | 2026-04-09 04:22:25.615956 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-04-09 04:22:25.659675 | orchestrator | ok: [testbed-manager] 2026-04-09 04:22:25.659764 | orchestrator | 2026-04-09 04:22:25.659854 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-09 04:22:29.837420 | orchestrator | changed: [testbed-manager] 2026-04-09 04:22:29.837527 | orchestrator | 2026-04-09 04:22:29.837544 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-09 04:23:32.772266 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-09 04:23:32.772378 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-09 04:23:32.772393 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-09 04:23:32.772407 | orchestrator | changed: [testbed-manager] 2026-04-09 04:23:32.772420 | orchestrator | 2026-04-09 04:23:32.772432 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-09 04:23:39.264852 | orchestrator | changed: [testbed-manager] 2026-04-09 04:23:39.264974 | orchestrator | 2026-04-09 04:23:39.264992 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-09 04:23:39.372464 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-09 04:23:39.372568 | orchestrator | 2026-04-09 04:23:39.372580 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-09 04:23:39.372589 | orchestrator | 2026-04-09 04:23:39.372597 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-09 04:23:39.466262 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:23:39.466356 | orchestrator | 2026-04-09 04:23:39.466371 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-09 04:23:39.572378 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-09 04:23:39.572497 | orchestrator | 2026-04-09 04:23:39.572514 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-09 04:23:40.674361 | orchestrator | changed: [testbed-manager] 2026-04-09 04:23:40.674513 | orchestrator | 2026-04-09 04:23:40.674542 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-09 04:23:44.306918 
| orchestrator | ok: [testbed-manager] 2026-04-09 04:23:44.307023 | orchestrator | 2026-04-09 04:23:44.307040 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-09 04:23:44.403878 | orchestrator | ok: [testbed-manager] => { 2026-04-09 04:23:44.404000 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-09 04:23:44.404018 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-09 04:23:44.404031 | orchestrator | "Checking running containers against expected versions...", 2026-04-09 04:23:44.404044 | orchestrator | "", 2026-04-09 04:23:44.404056 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-09 04:23:44.404067 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-09 04:23:44.404078 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404089 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-09 04:23:44.404100 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404111 | orchestrator | "", 2026-04-09 04:23:44.404123 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-09 04:23:44.404134 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-09 04:23:44.404145 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404156 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-09 04:23:44.404167 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404177 | orchestrator | "", 2026-04-09 04:23:44.404188 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-09 04:23:44.404199 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-09 04:23:44.404210 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404221 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-09 04:23:44.404231 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404242 | orchestrator | "", 2026-04-09 04:23:44.404253 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-09 04:23:44.404264 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-09 04:23:44.404275 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404286 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-09 04:23:44.404297 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404307 | orchestrator | "", 2026-04-09 04:23:44.404319 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-09 04:23:44.404332 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-09 04:23:44.404344 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404357 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-09 04:23:44.404369 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404383 | orchestrator | "", 2026-04-09 04:23:44.404395 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-09 04:23:44.404408 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.404457 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404471 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.404483 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404496 | orchestrator | "", 2026-04-09 04:23:44.404509 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-09 04:23:44.404522 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-09 04:23:44.404535 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404548 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-09 
04:23:44.404562 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404575 | orchestrator | "", 2026-04-09 04:23:44.404588 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-09 04:23:44.404601 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-09 04:23:44.404614 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404627 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-09 04:23:44.404640 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404652 | orchestrator | "", 2026-04-09 04:23:44.404665 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-09 04:23:44.404677 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-09 04:23:44.404689 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404699 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-09 04:23:44.404710 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404721 | orchestrator | "", 2026-04-09 04:23:44.404786 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-09 04:23:44.404799 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-09 04:23:44.404810 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404821 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-09 04:23:44.404832 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404843 | orchestrator | "", 2026-04-09 04:23:44.404854 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-09 04:23:44.404865 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.404876 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404887 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.404898 | orchestrator | " Status: ✅ MATCH", 2026-04-09 
04:23:44.404909 | orchestrator | "", 2026-04-09 04:23:44.404919 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-09 04:23:44.404930 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.404941 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.404952 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.404962 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.404973 | orchestrator | "", 2026-04-09 04:23:44.404984 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-09 04:23:44.404995 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.405005 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.405016 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.405027 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.405038 | orchestrator | "", 2026-04-09 04:23:44.405049 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-09 04:23:44.405060 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.405071 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.405082 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.405111 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.405123 | orchestrator | "", 2026-04-09 04:23:44.405134 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-09 04:23:44.405145 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.405156 | orchestrator | " Enabled: true", 2026-04-09 04:23:44.405175 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-09 04:23:44.405187 | orchestrator | " Status: ✅ MATCH", 2026-04-09 04:23:44.405197 | orchestrator | "", 2026-04-09 04:23:44.405210 | orchestrator | "=== Summary 
===", 2026-04-09 04:23:44.405229 | orchestrator | "Errors (version mismatches): 0", 2026-04-09 04:23:44.405247 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-09 04:23:44.405264 | orchestrator | "", 2026-04-09 04:23:44.405281 | orchestrator | "✅ All running containers match expected versions!" 2026-04-09 04:23:44.405299 | orchestrator | ] 2026-04-09 04:23:44.405315 | orchestrator | } 2026-04-09 04:23:44.405335 | orchestrator | 2026-04-09 04:23:44.405355 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-09 04:23:44.478254 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:23:44.478355 | orchestrator | 2026-04-09 04:23:44.478369 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:23:44.478380 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-04-09 04:23:44.478391 | orchestrator | 2026-04-09 04:23:57.578347 | orchestrator | 2026-04-09 04:23:57 | INFO  | Task fc27d1ea-d7d3-4579-a176-39fc93ad4101 (sync inventory) is running in background. Output coming soon. 
2026-04-09 04:24:30.170209 | orchestrator | 2026-04-09 04:23:59 | INFO  | Starting group_vars file reorganization 2026-04-09 04:24:30.170349 | orchestrator | 2026-04-09 04:23:59 | INFO  | Moved 0 file(s) to their respective directories 2026-04-09 04:24:30.170368 | orchestrator | 2026-04-09 04:23:59 | INFO  | Group_vars file reorganization completed 2026-04-09 04:24:30.170382 | orchestrator | 2026-04-09 04:24:01 | INFO  | Starting variable preparation from inventory 2026-04-09 04:24:30.170394 | orchestrator | 2026-04-09 04:24:04 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-04-09 04:24:30.170406 | orchestrator | 2026-04-09 04:24:04 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-04-09 04:24:30.170417 | orchestrator | 2026-04-09 04:24:05 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-04-09 04:24:30.170429 | orchestrator | 2026-04-09 04:24:05 | INFO  | 3 file(s) written, 6 host(s) processed 2026-04-09 04:24:30.170440 | orchestrator | 2026-04-09 04:24:05 | INFO  | Variable preparation completed 2026-04-09 04:24:30.170451 | orchestrator | 2026-04-09 04:24:06 | INFO  | Starting inventory overwrite handling 2026-04-09 04:24:30.170463 | orchestrator | 2026-04-09 04:24:06 | INFO  | Handling group overwrites in 99-overwrite 2026-04-09 04:24:30.170474 | orchestrator | 2026-04-09 04:24:06 | INFO  | Removing group frr:children from 60-generic 2026-04-09 04:24:30.170485 | orchestrator | 2026-04-09 04:24:06 | INFO  | Removing group netbird:children from 50-infrastructure 2026-04-09 04:24:30.170497 | orchestrator | 2026-04-09 04:24:06 | INFO  | Removing group ceph-mds from 50-ceph 2026-04-09 04:24:30.170508 | orchestrator | 2026-04-09 04:24:06 | INFO  | Removing group ceph-rgw from 50-ceph 2026-04-09 04:24:30.170520 | orchestrator | 2026-04-09 04:24:06 | INFO  | Handling group overwrites in 20-roles 2026-04-09 04:24:30.170531 | orchestrator | 2026-04-09 04:24:06 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-04-09 04:24:30.170542 | orchestrator | 2026-04-09 04:24:06 | INFO  | Removed 5 group(s) in total 2026-04-09 04:24:30.170554 | orchestrator | 2026-04-09 04:24:06 | INFO  | Inventory overwrite handling completed 2026-04-09 04:24:30.170565 | orchestrator | 2026-04-09 04:24:08 | INFO  | Starting merge of inventory files 2026-04-09 04:24:30.170576 | orchestrator | 2026-04-09 04:24:08 | INFO  | Inventory files merged successfully 2026-04-09 04:24:30.170587 | orchestrator | 2026-04-09 04:24:13 | INFO  | Generating minified hosts file 2026-04-09 04:24:30.170624 | orchestrator | 2026-04-09 04:24:14 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml 2026-04-09 04:24:30.170648 | orchestrator | 2026-04-09 04:24:14 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json 2026-04-09 04:24:30.170659 | orchestrator | 2026-04-09 04:24:16 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-04-09 04:24:30.170670 | orchestrator | 2026-04-09 04:24:28 | INFO  | Successfully wrote ClusterShell configuration 2026-04-09 04:24:30.409509 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 04:24:30.409592 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-09 04:24:30.409602 | orchestrator | + local max_attempts=60 2026-04-09 04:24:30.409612 | orchestrator | + local name=kolla-ansible 2026-04-09 04:24:30.409621 | orchestrator | + local attempt_num=1 2026-04-09 04:24:30.410419 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-09 04:24:30.446518 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 04:24:30.446596 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-09 04:24:30.446605 | orchestrator | + local max_attempts=60 2026-04-09 04:24:30.446614 | orchestrator | + local name=osism-ansible 2026-04-09 04:24:30.446620 | orchestrator | + local attempt_num=1 2026-04-09 
04:24:30.447360 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-09 04:24:30.489182 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 04:24:30.489263 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-04-09 04:24:30.663206 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-09 04:24:30.663311 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-09 04:24:30.663322 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-09 04:24:30.663329 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-09 04:24:30.663352 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-04-09 04:24:30.663359 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-04-09 04:24:30.663366 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-04-09 04:24:30.663372 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-04-09 04:24:30.663379 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 40 seconds ago 2026-04-09 04:24:30.663385 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 
mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-04-09 04:24:30.663392 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-04-09 04:24:30.663417 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp 2026-04-09 04:24:30.663424 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-09 04:24:30.663430 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-04-09 04:24:30.663437 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-04-09 04:24:30.663444 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-04-09 04:24:30.669809 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-04-09 04:24:30.669846 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-04-09 04:24:30.669853 | orchestrator | + osism apply facts 2026-04-09 04:24:42.242681 | orchestrator | 2026-04-09 04:24:42 | INFO  | Prepare task for execution of facts. 2026-04-09 04:24:42.320583 | orchestrator | 2026-04-09 04:24:42 | INFO  | Task 4b32ac61-509a-4de4-8b2b-0a94ba5210ba (facts) was prepared for execution. 2026-04-09 04:24:42.320677 | orchestrator | 2026-04-09 04:24:42 | INFO  | It takes a moment until task 4b32ac61-509a-4de4-8b2b-0a94ba5210ba (facts) has been started and output is visible here. 
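The xtrace above shows a `wait_for_container_healthy` helper that polls `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`. A hedged reconstruction of that retry loop, generalized so the probe is an arbitrary command and the sketch runs without Docker (the `probe` stub and `wait_until_healthy` name are illustrative, not the script's actual code):

```shell
#!/usr/bin/env bash
# Sketch of the wait_for_container_healthy pattern from the trace. The real
# helper runs: docker inspect -f '{{.State.Health.Status}}' NAME
# Here the probe command is a parameter so the loop can be exercised locally.

wait_until_healthy() {
    local max_attempts=$1; shift
    local attempt_num=1
    until [[ "$("$@")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "service did not become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 0.1
    done
    echo "healthy after ${attempt_num} attempt(s)"
}

# Stub probe standing in for docker inspect: "starting" twice, then "healthy".
# State lives in a temp file because $(...) runs the probe in a subshell.
statefile=$(mktemp)
echo 0 > "$statefile"
probe() {
    local n=$(( $(cat "$statefile") + 1 ))
    echo "$n" > "$statefile"
    (( n < 3 )) && echo starting || echo healthy
}

wait_until_healthy 60 probe   # prints: healthy after 3 attempt(s)
rm -f "$statefile"
```

With a real container you would pass the docker inspect command as the probe, e.g. `wait_until_healthy 60 docker inspect -f '{{.State.Health.Status}}' kolla-ansible`.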
2026-04-09 04:25:07.974993 | orchestrator | 2026-04-09 04:25:07.975093 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-09 04:25:07.975107 | orchestrator | 2026-04-09 04:25:07.975117 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-09 04:25:07.975127 | orchestrator | Thursday 09 April 2026 04:24:48 +0000 (0:00:02.255) 0:00:02.255 ******** 2026-04-09 04:25:07.975136 | orchestrator | ok: [testbed-manager] 2026-04-09 04:25:07.975146 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:25:07.975155 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:25:07.975164 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:25:07.975173 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:25:07.975181 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:25:07.975190 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:25:07.975199 | orchestrator | 2026-04-09 04:25:07.975207 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-09 04:25:07.975216 | orchestrator | Thursday 09 April 2026 04:24:52 +0000 (0:00:03.987) 0:00:06.242 ******** 2026-04-09 04:25:07.975225 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:25:07.975235 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:25:07.975243 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:25:07.975252 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:25:07.975261 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:25:07.975270 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:25:07.975278 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:25:07.975287 | orchestrator | 2026-04-09 04:25:07.975296 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 04:25:07.975305 | orchestrator | 2026-04-09 04:25:07.975314 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-09 04:25:07.975323 | orchestrator | Thursday 09 April 2026 04:24:55 +0000 (0:00:03.280) 0:00:09.523 ******** 2026-04-09 04:25:07.975332 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:25:07.975341 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:25:07.975350 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:25:07.975358 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:25:07.975367 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:25:07.975376 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:25:07.975407 | orchestrator | ok: [testbed-manager] 2026-04-09 04:25:07.975417 | orchestrator | 2026-04-09 04:25:07.975426 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 04:25:07.975435 | orchestrator | 2026-04-09 04:25:07.975444 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 04:25:07.975453 | orchestrator | Thursday 09 April 2026 04:25:03 +0000 (0:00:08.380) 0:00:17.903 ******** 2026-04-09 04:25:07.975461 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:25:07.975470 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:25:07.975479 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:25:07.975487 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:25:07.975496 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:25:07.975505 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:25:07.975513 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:25:07.975522 | orchestrator | 2026-04-09 04:25:07.975531 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:25:07.975540 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:25:07.975552 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-09 04:25:07.975562 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:25:07.975573 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:25:07.975583 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:25:07.975593 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:25:07.975604 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 04:25:07.975614 | orchestrator | 2026-04-09 04:25:07.975624 | orchestrator | 2026-04-09 04:25:07.975635 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:25:07.975644 | orchestrator | Thursday 09 April 2026 04:25:07 +0000 (0:00:03.706) 0:00:21.610 ******** 2026-04-09 04:25:07.975653 | orchestrator | =============================================================================== 2026-04-09 04:25:07.975662 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.38s 2026-04-09 04:25:07.975691 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.99s 2026-04-09 04:25:07.975701 | orchestrator | Gather facts for all hosts ---------------------------------------------- 3.71s 2026-04-09 04:25:07.975710 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 3.28s 2026-04-09 04:25:08.230933 | orchestrator | ++ semver 10.0.0 10.0.0-0 2026-04-09 04:25:08.322310 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-09 04:25:08.324506 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-04-09 04:25:08.371800 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-04-09 04:25:08.371917 | orchestrator 
| + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-04-09 04:25:08.377734 | orchestrator | + set -e 2026-04-09 04:25:08.377800 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-04-09 04:25:08.377824 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-09 04:25:08.384765 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-04-09 04:25:08.394875 | orchestrator | 2026-04-09 04:25:08.394938 | orchestrator | # UPGRADE SERVICES 2026-04-09 04:25:08.394951 | orchestrator | 2026-04-09 04:25:08.394963 | orchestrator | + set -e 2026-04-09 04:25:08.394974 | orchestrator | + echo 2026-04-09 04:25:08.394985 | orchestrator | + echo '# UPGRADE SERVICES' 2026-04-09 04:25:08.395022 | orchestrator | + echo 2026-04-09 04:25:08.395034 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 04:25:08.395880 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 04:25:08.395948 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 04:25:08.395969 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 04:25:08.395986 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 04:25:08.396230 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 04:25:08.396261 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 04:25:08.396338 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 04:25:08.396350 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 04:25:08.396361 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 04:25:08.396372 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 04:25:08.396383 | orchestrator | ++ export ARA=false 2026-04-09 04:25:08.396394 | orchestrator | ++ ARA=false 2026-04-09 04:25:08.396409 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 04:25:08.396426 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 04:25:08.396451 | orchestrator | ++ export TEMPEST=false 2026-04-09 
04:25:08.396472 | orchestrator | ++ TEMPEST=false 2026-04-09 04:25:08.396490 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 04:25:08.396507 | orchestrator | ++ IS_ZUUL=true 2026-04-09 04:25:08.396526 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 04:25:08.396544 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 04:25:08.396560 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 04:25:08.396578 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 04:25:08.396595 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 04:25:08.396612 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 04:25:08.396631 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 04:25:08.396650 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 04:25:08.396840 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 04:25:08.396868 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 04:25:08.396887 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-09 04:25:08.396905 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-09 04:25:08.396924 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-04-09 04:25:08.396942 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-04-09 04:25:08.396960 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-04-09 04:25:08.407229 | orchestrator | + set -e 2026-04-09 04:25:08.407318 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 04:25:08.407963 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 04:25:08.408061 | orchestrator | ++ INTERACTIVE=false 2026-04-09 04:25:08.408087 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 04:25:08.408102 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 04:25:08.408328 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 04:25:08.408346 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 04:25:08.408357 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 04:25:08.408394 | orchestrator | ++ export 
CEPH_VERSION=reef
2026-04-09 04:25:08.408406 | orchestrator | ++ CEPH_VERSION=reef
2026-04-09 04:25:08.408417 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-09 04:25:08.408429 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-09 04:25:08.408440 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-09 04:25:08.408451 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-09 04:25:08.408462 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-09 04:25:08.408472 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-09 04:25:08.408483 | orchestrator | ++ export ARA=false
2026-04-09 04:25:08.408494 | orchestrator | ++ ARA=false
2026-04-09 04:25:08.408505 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-09 04:25:08.408516 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-09 04:25:08.408527 | orchestrator | ++ export TEMPEST=false
2026-04-09 04:25:08.408538 | orchestrator | ++ TEMPEST=false
2026-04-09 04:25:08.408548 | orchestrator | ++ export IS_ZUUL=true
2026-04-09 04:25:08.408559 | orchestrator | ++ IS_ZUUL=true
2026-04-09 04:25:08.408570 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2026-04-09 04:25:08.408581 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2026-04-09 04:25:08.408593 | orchestrator | ++ export EXTERNAL_API=false
2026-04-09 04:25:08.408918 | orchestrator | ++ EXTERNAL_API=false
2026-04-09 04:25:08.408949 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-09 04:25:08.408961 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-09 04:25:08.408972 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-09 04:25:08.408983 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-09 04:25:08.408993 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-09 04:25:08.409004 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-09 04:25:08.409015 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-09 04:25:08.409026 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-09 04:25:08.409060 | orchestrator |
2026-04-09 04:25:08.409072 | orchestrator | # PULL IMAGES
2026-04-09 04:25:08.409083 | orchestrator |
2026-04-09 04:25:08.409095 | orchestrator | + echo
2026-04-09 04:25:08.409106 | orchestrator | + echo '# PULL IMAGES'
2026-04-09 04:25:08.409117 | orchestrator | + echo
2026-04-09 04:25:08.410120 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-09 04:25:08.478731 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 04:25:08.478820 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-09 04:25:09.677197 | orchestrator | 2026-04-09 04:25:09 | INFO  | Trying to run play pull-images in environment custom
2026-04-09 04:25:19.795221 | orchestrator | 2026-04-09 04:25:19 | INFO  | Prepare task for execution of pull-images.
2026-04-09 04:25:19.894964 | orchestrator | 2026-04-09 04:25:19 | INFO  | Task 94290a24-8edd-4ad1-a0c2-63b127b2d9ef (pull-images) was prepared for execution.
2026-04-09 04:25:19.895074 | orchestrator | 2026-04-09 04:25:19 | INFO  | Task 94290a24-8edd-4ad1-a0c2-63b127b2d9ef is running in background. No more output. Check ARA for logs.
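Editor's note: the trace above shows the upgrade scripts gating each `osism apply` step on a version comparison (`semver 9.5.0 7.0.0` returning 1, then `[[ 1 -ge 0 ]]`). A minimal sketch of that pattern, assuming a simplified pure-shell `semver` helper (the real one comes from `include.sh` and is not shown in the log):

```shell
#!/bin/sh
# Hypothetical stand-in for the semver helper used by the upgrade scripts:
# prints 0 if $1 == $2, 1 if $1 > $2, -1 if $1 < $2 (dotted version compare).
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}

MANAGER_VERSION=9.5.0
# Mirror the gate from the log: only pull images at manager version >= 7.0.0.
if [ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The `sort -V` trick relies on GNU coreutils' version sort; the actual helper in the configuration repository may differ.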
2026-04-09 04:25:20.167875 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-04-09 04:25:20.176627 | orchestrator | + set -e
2026-04-09 04:25:20.176758 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 04:25:20.176776 | orchestrator | ++ export INTERACTIVE=false
2026-04-09 04:25:20.176789 | orchestrator | ++ INTERACTIVE=false
2026-04-09 04:25:20.176800 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 04:25:20.176811 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 04:25:20.176826 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-09 04:25:20.177442 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-09 04:25:20.189867 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-09 04:25:20.189966 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-09 04:25:20.189983 | orchestrator | ++ semver 10.0.0 8.0.3
2026-04-09 04:25:20.249654 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 04:25:20.249787 | orchestrator | + osism apply frr
2026-04-09 04:25:31.744133 | orchestrator | 2026-04-09 04:25:31 | INFO  | Prepare task for execution of frr.
2026-04-09 04:25:31.830349 | orchestrator | 2026-04-09 04:25:31 | INFO  | Task 64d722a7-a68b-438c-8c8b-3485c135a0b1 (frr) was prepared for execution.
2026-04-09 04:25:31.830449 | orchestrator | 2026-04-09 04:25:31 | INFO  | It takes a moment until task 64d722a7-a68b-438c-8c8b-3485c135a0b1 (frr) has been started and output is visible here.
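Editor's note: `manager-version.sh` in the trace above derives `MANAGER_VERSION` from the configuration repository with a single `awk` expression. A self-contained sketch of that extraction against a throwaway file (the temporary path is illustrative, not the real `/opt/configuration/...` location):

```shell
#!/bin/sh
# Recreate the extraction from the log against a minimal configuration file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
---
manager_version: 10.0.0
openstack_version: 2024.2
EOF

# Same awk expression as in the log: split fields on ": " and print the
# value of the line starting with "manager_version:".
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' "$cfg")
echo "$MANAGER_VERSION"
rm -f "$cfg"
```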
2026-04-09 04:26:08.637494 | orchestrator |
2026-04-09 04:26:08.637576 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-09 04:26:08.637586 | orchestrator |
2026-04-09 04:26:08.637592 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-09 04:26:08.637597 | orchestrator | Thursday 09 April 2026 04:25:39 +0000 (0:00:03.278) 0:00:03.278 ********
2026-04-09 04:26:08.637603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 04:26:08.637610 | orchestrator |
2026-04-09 04:26:08.637616 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-09 04:26:08.637621 | orchestrator | Thursday 09 April 2026 04:25:41 +0000 (0:00:02.291) 0:00:05.570 ********
2026-04-09 04:26:08.637627 | orchestrator | ok: [testbed-manager]
2026-04-09 04:26:08.637634 | orchestrator |
2026-04-09 04:26:08.637639 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-09 04:26:08.637645 | orchestrator | Thursday 09 April 2026 04:25:44 +0000 (0:00:02.601) 0:00:08.172 ********
2026-04-09 04:26:08.637650 | orchestrator | ok: [testbed-manager]
2026-04-09 04:26:08.637655 | orchestrator |
2026-04-09 04:26:08.637668 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-09 04:26:08.637673 | orchestrator | Thursday 09 April 2026 04:25:47 +0000 (0:00:03.048) 0:00:11.220 ********
2026-04-09 04:26:08.637679 | orchestrator | ok: [testbed-manager]
2026-04-09 04:26:08.637684 | orchestrator |
2026-04-09 04:26:08.637689 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-09 04:26:08.637694 | orchestrator | Thursday 09 April 2026 04:25:49 +0000 (0:00:01.962) 0:00:13.183 ********
2026-04-09 04:26:08.637714 | orchestrator | ok: [testbed-manager]
2026-04-09 04:26:08.637720 | orchestrator |
2026-04-09 04:26:08.637725 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-09 04:26:08.637731 | orchestrator | Thursday 09 April 2026 04:25:51 +0000 (0:00:01.918) 0:00:15.101 ********
2026-04-09 04:26:08.637736 | orchestrator | ok: [testbed-manager]
2026-04-09 04:26:08.637741 | orchestrator |
2026-04-09 04:26:08.637747 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-09 04:26:08.637754 | orchestrator | Thursday 09 April 2026 04:25:54 +0000 (0:00:02.744) 0:00:17.846 ********
2026-04-09 04:26:08.637760 | orchestrator | skipping: [testbed-manager]
2026-04-09 04:26:08.637766 | orchestrator |
2026-04-09 04:26:08.637772 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-09 04:26:08.637777 | orchestrator | Thursday 09 April 2026 04:25:55 +0000 (0:00:01.207) 0:00:19.053 ********
2026-04-09 04:26:08.637782 | orchestrator | skipping: [testbed-manager]
2026-04-09 04:26:08.637787 | orchestrator |
2026-04-09 04:26:08.637792 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-09 04:26:08.637797 | orchestrator | Thursday 09 April 2026 04:25:56 +0000 (0:00:01.363) 0:00:20.416 ********
2026-04-09 04:26:08.637803 | orchestrator | skipping: [testbed-manager]
2026-04-09 04:26:08.637808 | orchestrator |
2026-04-09 04:26:08.637813 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-09 04:26:08.637819 | orchestrator | Thursday 09 April 2026 04:25:57 +0000 (0:00:01.215) 0:00:21.632 ********
2026-04-09 04:26:08.637824 | orchestrator | skipping: [testbed-manager]
2026-04-09 04:26:08.637829 | orchestrator |
2026-04-09 04:26:08.637834 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-09 04:26:08.637840 | orchestrator | Thursday 09 April 2026 04:25:59 +0000 (0:00:01.204) 0:00:22.836 ********
2026-04-09 04:26:08.637845 | orchestrator | skipping: [testbed-manager]
2026-04-09 04:26:08.637850 | orchestrator |
2026-04-09 04:26:08.637855 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-09 04:26:08.637861 | orchestrator | Thursday 09 April 2026 04:26:00 +0000 (0:00:01.169) 0:00:24.006 ********
2026-04-09 04:26:08.637866 | orchestrator | ok: [testbed-manager]
2026-04-09 04:26:08.637871 | orchestrator |
2026-04-09 04:26:08.637876 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-09 04:26:08.637881 | orchestrator | Thursday 09 April 2026 04:26:02 +0000 (0:00:01.989) 0:00:25.995 ********
2026-04-09 04:26:08.637887 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-09 04:26:08.637892 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-09 04:26:08.637899 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-09 04:26:08.637904 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-09 04:26:08.637909 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-09 04:26:08.637914 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-09 04:26:08.637920 | orchestrator |
2026-04-09 04:26:08.637925 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-09 04:26:08.637930 | orchestrator | Thursday 09 April 2026 04:26:05 +0000 (0:00:03.473) 0:00:29.468 ********
2026-04-09 04:26:08.637935 | orchestrator | ok: [testbed-manager]
2026-04-09 04:26:08.637940 | orchestrator |
2026-04-09 04:26:08.637946 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 04:26:08.637951 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 04:26:08.637956 | orchestrator |
2026-04-09 04:26:08.637961 | orchestrator |
2026-04-09 04:26:08.637970 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 04:26:08.637976 | orchestrator | Thursday 09 April 2026 04:26:08 +0000 (0:00:02.591) 0:00:32.060 ********
2026-04-09 04:26:08.637981 | orchestrator | ===============================================================================
2026-04-09 04:26:08.637997 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.47s
2026-04-09 04:26:08.638004 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.05s
2026-04-09 04:26:08.638011 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.74s
2026-04-09 04:26:08.638059 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.60s
2026-04-09 04:26:08.638065 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.59s
2026-04-09 04:26:08.638072 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.29s
2026-04-09 04:26:08.638078 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.99s
2026-04-09 04:26:08.638085 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.96s
2026-04-09 04:26:08.638091 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.92s
2026-04-09 04:26:08.638097 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 1.36s
2026-04-09 04:26:08.638103 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 1.22s
2026-04-09 04:26:08.638110 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 1.21s
2026-04-09 04:26:08.638116 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.20s
2026-04-09 04:26:08.638123 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.17s
2026-04-09 04:26:08.891718 | orchestrator | + osism apply kubernetes
2026-04-09 04:26:10.283730 | orchestrator | 2026-04-09 04:26:10 | INFO  | Prepare task for execution of kubernetes.
2026-04-09 04:26:10.382548 | orchestrator | 2026-04-09 04:26:10 | INFO  | Task bb4d850c-8b66-4bcc-a64b-1def9b917cd1 (kubernetes) was prepared for execution.
2026-04-09 04:26:10.382638 | orchestrator | 2026-04-09 04:26:10 | INFO  | It takes a moment until task bb4d850c-8b66-4bcc-a64b-1def9b917cd1 (kubernetes) has been started and output is visible here.
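Editor's note: the frr role's "Set sysctl parameters" task above applies a fixed set of IPv4 forwarding, redirect, and rp_filter settings. An equivalent standalone sketch using the parameter/value pairs from the log (applying them for real requires root, so this sketch only echoes the `sysctl` commands it would run):

```shell
#!/bin/sh
# The parameter/value pairs applied by the "Set sysctl parameters" task.
set -- \
    net.ipv4.ip_forward=1 \
    net.ipv4.conf.all.send_redirects=0 \
    net.ipv4.conf.all.accept_redirects=0 \
    net.ipv4.fib_multipath_hash_policy=1 \
    net.ipv4.conf.default.ignore_routes_with_linkdown=1 \
    net.ipv4.conf.all.rp_filter=2

for kv in "$@"; do
    # On a real host this would be: sysctl -w "$kv" (as root).
    echo "sysctl -w $kv"
done
```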
2026-04-09 04:26:53.973719 | orchestrator |
2026-04-09 04:26:53.973831 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-09 04:26:53.973847 | orchestrator |
2026-04-09 04:26:53.973857 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-09 04:26:53.973867 | orchestrator | Thursday 09 April 2026 04:26:16 +0000 (0:00:02.639) 0:00:02.639 ********
2026-04-09 04:26:53.973877 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:26:53.973887 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:26:53.973896 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:26:53.973905 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:26:53.973913 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:26:53.973922 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:26:53.973931 | orchestrator |
2026-04-09 04:26:53.973949 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-09 04:26:53.973959 | orchestrator | Thursday 09 April 2026 04:26:20 +0000 (0:00:03.900) 0:00:06.540 ********
2026-04-09 04:26:53.973968 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.973978 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.973987 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.973995 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.974004 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.974061 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.974072 | orchestrator |
2026-04-09 04:26:53.974081 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-09 04:26:53.974090 | orchestrator | Thursday 09 April 2026 04:26:22 +0000 (0:00:02.008) 0:00:08.548 ********
2026-04-09 04:26:53.974119 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.974129 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.974138 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.974147 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.974155 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.974164 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.974173 | orchestrator |
2026-04-09 04:26:53.974182 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-09 04:26:53.974191 | orchestrator | Thursday 09 April 2026 04:26:24 +0000 (0:00:01.839) 0:00:10.387 ********
2026-04-09 04:26:53.974200 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:26:53.974208 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:26:53.974275 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:26:53.974287 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:26:53.974298 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:26:53.974308 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:26:53.974318 | orchestrator |
2026-04-09 04:26:53.974329 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-09 04:26:53.974339 | orchestrator | Thursday 09 April 2026 04:26:27 +0000 (0:00:02.759) 0:00:13.147 ********
2026-04-09 04:26:53.974349 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:26:53.974359 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:26:53.974370 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:26:53.974380 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:26:53.974390 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:26:53.974400 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:26:53.974410 | orchestrator |
2026-04-09 04:26:53.974421 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-09 04:26:53.974431 | orchestrator | Thursday 09 April 2026 04:26:29 +0000 (0:00:02.322) 0:00:15.469 ********
2026-04-09 04:26:53.974439 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:26:53.974448 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:26:53.974457 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:26:53.974466 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:26:53.974474 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:26:53.974483 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:26:53.974492 | orchestrator |
2026-04-09 04:26:53.974501 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-09 04:26:53.974510 | orchestrator | Thursday 09 April 2026 04:26:32 +0000 (0:00:02.613) 0:00:18.082 ********
2026-04-09 04:26:53.974519 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.974528 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.974536 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.974545 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.974554 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.974563 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.974571 | orchestrator |
2026-04-09 04:26:53.974580 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-09 04:26:53.974589 | orchestrator | Thursday 09 April 2026 04:26:34 +0000 (0:00:01.944) 0:00:20.028 ********
2026-04-09 04:26:53.974598 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.974607 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.974616 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.974625 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.974633 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.974642 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.974651 | orchestrator |
2026-04-09 04:26:53.974659 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-09 04:26:53.974668 | orchestrator | Thursday 09 April 2026 04:26:36 +0000 (0:00:02.039) 0:00:22.067 ********
2026-04-09 04:26:53.974677 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 04:26:53.974686 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 04:26:53.974695 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.974712 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 04:26:53.974721 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 04:26:53.974730 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.974739 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 04:26:53.974747 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 04:26:53.974756 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.974765 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 04:26:53.974774 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 04:26:53.974783 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.974808 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 04:26:53.974818 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 04:26:53.974827 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.974836 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 04:26:53.974845 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 04:26:53.974854 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.974862 | orchestrator |
2026-04-09 04:26:53.974871 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-09 04:26:53.974880 | orchestrator | Thursday 09 April 2026 04:26:38 +0000 (0:00:02.094) 0:00:24.162 ********
2026-04-09 04:26:53.974889 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.974898 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.974906 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.974915 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.974924 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.974932 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.974941 | orchestrator |
2026-04-09 04:26:53.974950 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-09 04:26:53.974960 | orchestrator | Thursday 09 April 2026 04:26:40 +0000 (0:00:02.206) 0:00:26.369 ********
2026-04-09 04:26:53.974969 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:26:53.974978 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:26:53.974986 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:26:53.974995 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:26:53.975004 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:26:53.975012 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:26:53.975021 | orchestrator |
2026-04-09 04:26:53.975030 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-09 04:26:53.975038 | orchestrator | Thursday 09 April 2026 04:26:42 +0000 (0:00:01.812) 0:00:28.181 ********
2026-04-09 04:26:53.975047 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:26:53.975056 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:26:53.975064 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:26:53.975073 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:26:53.975081 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:26:53.975090 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:26:53.975099 | orchestrator |
2026-04-09 04:26:53.975107 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-09 04:26:53.975116 | orchestrator | Thursday 09 April 2026 04:26:45 +0000 (0:00:02.726) 0:00:30.907 ********
2026-04-09 04:26:53.975125 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.975134 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.975143 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.975155 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.975164 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.975173 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.975182 | orchestrator |
2026-04-09 04:26:53.975191 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-09 04:26:53.975205 | orchestrator | Thursday 09 April 2026 04:26:47 +0000 (0:00:01.831) 0:00:32.739 ********
2026-04-09 04:26:53.975214 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.975240 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.975249 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.975258 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.975267 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.975281 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.975290 | orchestrator |
2026-04-09 04:26:53.975299 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-09 04:26:53.975309 | orchestrator | Thursday 09 April 2026 04:26:49 +0000 (0:00:02.249) 0:00:34.989 ********
2026-04-09 04:26:53.975317 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.975326 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.975335 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.975344 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.975352 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.975361 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.975369 | orchestrator |
2026-04-09 04:26:53.975378 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-09 04:26:53.975387 | orchestrator | Thursday 09 April 2026 04:26:51 +0000 (0:00:02.091) 0:00:37.080 ********
2026-04-09 04:26:53.975396 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-09 04:26:53.975405 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-09 04:26:53.975414 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.975422 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-09 04:26:53.975431 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-09 04:26:53.975440 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.975448 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-09 04:26:53.975457 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-09 04:26:53.975466 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:26:53.975474 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-09 04:26:53.975483 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-09 04:26:53.975492 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:26:53.975500 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-09 04:26:53.975509 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-09 04:26:53.975518 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:26:53.975527 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-09 04:26:53.975535 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-09 04:26:53.975544 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:26:53.975553 | orchestrator |
2026-04-09 04:26:53.975561 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-09 04:26:53.975570 | orchestrator | Thursday 09 April 2026 04:26:53 +0000 (0:00:01.953) 0:00:39.033 ********
2026-04-09 04:26:53.975583 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:26:53.975593 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:26:53.975607 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:29:08.473513 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:29:08.473642 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:29:08.473659 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:29:08.473668 | orchestrator |
2026-04-09 04:29:08.473678 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-09 04:29:08.473688 | orchestrator | Thursday 09 April 2026 04:26:55 +0000 (0:00:01.999) 0:00:41.033 ********
2026-04-09 04:29:08.473697 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:29:08.473705 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:29:08.473712 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:29:08.473741 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:29:08.473749 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:29:08.473758 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:29:08.473766 | orchestrator |
2026-04-09 04:29:08.473774 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-09 04:29:08.473781 | orchestrator |
2026-04-09 04:29:08.473789 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-09 04:29:08.473798 | orchestrator | Thursday 09 April 2026 04:26:58 +0000 (0:00:03.189) 0:00:44.222 ********
2026-04-09 04:29:08.473806 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.473815 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.473823 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.473831 | orchestrator |
2026-04-09 04:29:08.473839 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-09 04:29:08.473848 | orchestrator | Thursday 09 April 2026 04:27:02 +0000 (0:00:04.103) 0:00:48.325 ********
2026-04-09 04:29:08.473856 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.473864 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.473871 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.473879 | orchestrator |
2026-04-09 04:29:08.473887 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-09 04:29:08.473894 | orchestrator | Thursday 09 April 2026 04:27:05 +0000 (0:00:02.977) 0:00:51.303 ********
2026-04-09 04:29:08.473903 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:29:08.473911 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:29:08.473919 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:29:08.473927 | orchestrator |
2026-04-09 04:29:08.473935 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-09 04:29:08.473944 | orchestrator | Thursday 09 April 2026 04:27:07 +0000 (0:00:02.248) 0:00:53.551 ********
2026-04-09 04:29:08.473952 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.473960 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.473968 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.473977 | orchestrator |
2026-04-09 04:29:08.473985 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-09 04:29:08.473994 | orchestrator | Thursday 09 April 2026 04:27:09 +0000 (0:00:01.992) 0:00:55.544 ********
2026-04-09 04:29:08.474003 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:29:08.474012 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:29:08.474099 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:29:08.474109 | orchestrator |
2026-04-09 04:29:08.474118 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-09 04:29:08.474127 | orchestrator | Thursday 09 April 2026 04:27:11 +0000 (0:00:01.636) 0:00:57.181 ********
2026-04-09 04:29:08.474135 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.474142 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.474149 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.474156 | orchestrator |
2026-04-09 04:29:08.474164 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-09 04:29:08.474172 | orchestrator | Thursday 09 April 2026 04:27:13 +0000 (0:00:02.054) 0:00:59.235 ********
2026-04-09 04:29:08.474180 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.474187 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.474194 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.474202 | orchestrator |
2026-04-09 04:29:08.474210 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-09 04:29:08.474218 | orchestrator | Thursday 09 April 2026 04:27:15 +0000 (0:00:02.388) 0:01:01.623 ********
2026-04-09 04:29:08.474227 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:29:08.474235 | orchestrator |
2026-04-09 04:29:08.474243 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-09 04:29:08.474252 | orchestrator | Thursday 09 April 2026 04:27:17 +0000 (0:00:01.823) 0:01:03.446 ********
2026-04-09 04:29:08.474270 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.474278 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.474286 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.474294 | orchestrator |
2026-04-09 04:29:08.474302 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-09 04:29:08.474310 | orchestrator | Thursday 09 April 2026 04:27:20 +0000 (0:00:02.721) 0:01:06.168 ********
2026-04-09 04:29:08.474318 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:29:08.474326 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:29:08.474333 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.474341 | orchestrator |
2026-04-09 04:29:08.474349 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-09 04:29:08.474356 | orchestrator | Thursday 09 April 2026 04:27:22 +0000 (0:00:01.617) 0:01:07.785 ********
2026-04-09 04:29:08.474363 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:29:08.474371 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:29:08.474379 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:29:08.474387 | orchestrator |
2026-04-09 04:29:08.474395 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-09 04:29:08.474403 | orchestrator | Thursday 09 April 2026 04:27:23 +0000 (0:00:01.818) 0:01:09.604 ********
2026-04-09 04:29:08.474410 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:29:08.474418 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:29:08.474426 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:29:08.474434 | orchestrator |
2026-04-09 04:29:08.474441 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-09 04:29:08.474448 | orchestrator | Thursday 09 April 2026 04:27:26 +0000 (0:00:02.501) 0:01:12.105 ********
2026-04-09 04:29:08.474456 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:29:08.474464 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:29:08.474492 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:29:08.474500 | orchestrator |
2026-04-09 04:29:08.474508 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-09 04:29:08.474516 | orchestrator | Thursday 09 April 2026 04:27:27 +0000 (0:00:01.516) 0:01:13.622 ********
2026-04-09 04:29:08.474523 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:29:08.474531 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:29:08.474539 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:29:08.474546 | orchestrator |
2026-04-09 04:29:08.474555 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-09 04:29:08.474562 | orchestrator | Thursday 09 April 2026 04:27:29 +0000 (0:00:01.465) 0:01:15.088 ********
2026-04-09 04:29:08.474570 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:29:08.474578 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:29:08.474586 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:29:08.474593 | orchestrator |
2026-04-09 04:29:08.474600 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-09 04:29:08.474623 | orchestrator | Thursday 09 April 2026 04:27:31 +0000 (0:00:02.542) 0:01:17.630 ********
2026-04-09 04:29:08.474632 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.474640 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.474648 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.474656 | orchestrator |
2026-04-09 04:29:08.474664 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-09 04:29:08.474672 | orchestrator | Thursday 09 April 2026 04:27:34 +0000 (0:00:02.430) 0:01:20.061 ********
2026-04-09 04:29:08.474680 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.474688 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.474695 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.474703 | orchestrator |
2026-04-09 04:29:08.474711
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-09 04:29:08.474719 | orchestrator | Thursday 09 April 2026 04:27:35 +0000 (0:00:01.495) 0:01:21.557 ******** 2026-04-09 04:29:08.474727 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-09 04:29:08.474744 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-09 04:29:08.474753 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-09 04:29:08.474761 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-09 04:29:08.474770 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-09 04:29:08.474777 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-04-09 04:29:08.474785 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.474792 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.474801 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.474809 | orchestrator |
2026-04-09 04:29:08.474817 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-09 04:29:08.474826 | orchestrator | Thursday 09 April 2026 04:28:00 +0000 (0:00:24.185) 0:01:45.743 ********
2026-04-09 04:29:08.474833 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:29:08.474842 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:29:08.474849 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:29:08.474857 | orchestrator |
2026-04-09 04:29:08.474864 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-09 04:29:08.474872 | orchestrator | Thursday 09 April 2026 04:28:01 +0000 (0:00:01.848) 0:01:47.592 ********
2026-04-09 04:29:08.474880 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:29:08.474889 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:29:08.474897 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:29:08.474905 | orchestrator |
2026-04-09 04:29:08.474913 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-09 04:29:08.474922 | orchestrator | Thursday 09 April 2026 04:28:04 +0000 (0:00:03.045) 0:01:50.637 ********
2026-04-09 04:29:08.474930 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.474938 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.474945 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.474953 | orchestrator |
2026-04-09 04:29:08.474962 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-09 04:29:08.474970 | orchestrator | Thursday 09 April 2026 04:28:07 +0000 (0:00:02.395) 0:01:53.033 ********
2026-04-09 04:29:08.474978 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:29:08.474987 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:29:08.474995 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:29:08.475004 | orchestrator |
2026-04-09 04:29:08.475012 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-09 04:29:08.475019 | orchestrator | Thursday 09 April 2026 04:29:04 +0000 (0:00:56.859) 0:02:49.892 ********
2026-04-09 04:29:08.475027 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.475035 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.475043 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.475052 | orchestrator |
2026-04-09 04:29:08.475082 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-09 04:29:08.475091 | orchestrator | Thursday 09 April 2026 04:29:05 +0000 (0:00:01.736) 0:02:51.629 ********
2026-04-09 04:29:08.475099 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:29:08.475106 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:29:08.475114 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:29:08.475122 | orchestrator |
2026-04-09 04:29:08.475130 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-09 04:29:08.475139 | orchestrator | Thursday 09 April 2026 04:29:07 +0000 (0:00:01.781) 0:02:53.411 ********
2026-04-09 04:29:08.475152 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:29:08.475168 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:29:08.475177 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:29:08.475185 | orchestrator |
2026-04-09 04:29:08.475200 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-09 04:30:00.550758 | orchestrator | Thursday 09 April 2026 04:29:09 +0000 (0:00:01.725) 0:02:55.137 ********
2026-04-09 04:30:00.550848 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:30:00.550857 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:30:00.550862 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:30:00.550868 | orchestrator |
2026-04-09 04:30:00.550874 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-09 04:30:00.550880 | orchestrator | Thursday 09 April 2026 04:29:11 +0000 (0:00:01.780) 0:02:56.917 ********
2026-04-09 04:30:00.550885 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:30:00.550890 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:30:00.550896 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:30:00.550901 | orchestrator |
2026-04-09 04:30:00.550906 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-09 04:30:00.550911 | orchestrator | Thursday 09 April 2026 04:29:12 +0000 (0:00:01.726) 0:02:58.643 ********
2026-04-09 04:30:00.550917 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:30:00.550923 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:30:00.550928 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:30:00.550933 | orchestrator |
2026-04-09 04:30:00.550938 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-09 04:30:00.550944 | orchestrator | Thursday 09 April 2026 04:29:14 +0000 (0:00:01.750) 0:03:00.394 ********
2026-04-09 04:30:00.550949 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:30:00.550954 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:30:00.550959 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:30:00.550964 | orchestrator |
2026-04-09 04:30:00.550969 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-09 04:30:00.550975 | orchestrator | Thursday 09 April 2026 04:29:16 +0000 (0:00:01.770) 0:03:02.164 ********
2026-04-09 04:30:00.550980 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:30:00.550985 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:30:00.550990 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:30:00.550995 | orchestrator |
2026-04-09 04:30:00.551001 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-09 04:30:00.551006 | orchestrator | Thursday 09 April 2026 04:29:18 +0000 (0:00:01.932) 0:03:04.097 ********
2026-04-09 04:30:00.551011 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:30:00.551017 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:30:00.551022 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:30:00.551027 | orchestrator |
2026-04-09 04:30:00.551032 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-09 04:30:00.551037 | orchestrator | Thursday 09 April 2026 04:29:20 +0000 (0:00:02.284) 0:03:06.382 ********
2026-04-09 04:30:00.551042 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:30:00.551048 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:30:00.551053 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:30:00.551058 | orchestrator |
2026-04-09 04:30:00.551063 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-09 04:30:00.551120 | orchestrator | Thursday 09 April 2026 04:29:22 +0000 (0:00:01.442) 0:03:07.825 ********
2026-04-09 04:30:00.551126 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:30:00.551132 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:30:00.551137 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:30:00.551142 | orchestrator |
2026-04-09 04:30:00.551147 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-09 04:30:00.551153 | orchestrator | Thursday 09 April 2026 04:29:23 +0000 (0:00:01.370) 0:03:09.195 ********
2026-04-09 04:30:00.551158 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:30:00.551163 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:30:00.551169 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:30:00.551192 | orchestrator |
2026-04-09 04:30:00.551198 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-09 04:30:00.551203 | orchestrator | Thursday 09 April 2026 04:29:25 +0000 (0:00:01.737) 0:03:10.933 ********
2026-04-09 04:30:00.551208 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:30:00.551213 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:30:00.551218 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:30:00.551224 | orchestrator |
2026-04-09 04:30:00.551230 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-09 04:30:00.551237 | orchestrator | Thursday 09 April 2026 04:29:26 +0000 (0:00:01.716) 0:03:12.649 ********
2026-04-09 04:30:00.551243 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-09 04:30:00.551248 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-09 04:30:00.551253 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-09 04:30:00.551259 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-09 04:30:00.551264 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-09 04:30:00.551269 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-09 04:30:00.551274 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-09 04:30:00.551280 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-09 04:30:00.551285 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-09 04:30:00.551290 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-09 04:30:00.551296 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-09 04:30:00.551314 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-09 04:30:00.551331 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-09 04:30:00.551338 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-09 04:30:00.551344 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-09 04:30:00.551351 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-09 04:30:00.551357 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-09 04:30:00.551364 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-09 04:30:00.551370 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-09 04:30:00.551377 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-09 04:30:00.551383 | orchestrator |
2026-04-09 04:30:00.551389 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-09 04:30:00.551396 | orchestrator |
2026-04-09 04:30:00.551402 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-09 04:30:00.551408 | orchestrator | Thursday 09 April 2026 04:29:31 +0000 (0:00:04.609) 0:03:17.259 ********
2026-04-09 04:30:00.551415 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:30:00.551421 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:30:00.551427 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:30:00.551433 | orchestrator |
2026-04-09 04:30:00.551439 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-09 04:30:00.551445 | orchestrator | Thursday 09 April 2026 04:29:33 +0000 (0:00:01.815) 0:03:19.075 ********
2026-04-09 04:30:00.551456 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:30:00.551462 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:30:00.551468 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:30:00.551475 | orchestrator |
2026-04-09 04:30:00.551481 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-09 04:30:00.551487 | orchestrator | Thursday 09 April 2026 04:29:35 +0000 (0:00:02.549) 0:03:21.624 ********
2026-04-09 04:30:00.551493 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:30:00.551499 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:30:00.551506 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:30:00.551512 | orchestrator |
2026-04-09 04:30:00.551518 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-09 04:30:00.551524 | orchestrator | Thursday 09 April 2026 04:29:37 +0000 (0:00:01.363) 0:03:22.987 ********
2026-04-09 04:30:00.551531 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 04:30:00.551537 | orchestrator |
2026-04-09 04:30:00.551543 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-09 04:30:00.551550 | orchestrator | Thursday 09 April 2026 04:29:39 +0000 (0:00:01.960) 0:03:24.948 ********
2026-04-09 04:30:00.551556 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:30:00.551562 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:30:00.551568 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:30:00.551574 | orchestrator |
2026-04-09 04:30:00.551580 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-09 04:30:00.551587 | orchestrator | Thursday 09 April 2026 04:29:40 +0000 (0:00:01.360) 0:03:26.308 ********
2026-04-09 04:30:00.551593 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:30:00.551599 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:30:00.551606 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:30:00.551612 | orchestrator |
2026-04-09 04:30:00.551618 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-09 04:30:00.551625 | orchestrator | Thursday 09 April 2026 04:29:42 +0000 (0:00:01.359) 0:03:27.668 ********
2026-04-09 04:30:00.551631 | orchestrator | skipping: [testbed-node-3]
2026-04-09 04:30:00.551637 | orchestrator | skipping: [testbed-node-4]
2026-04-09 04:30:00.551643 | orchestrator | skipping: [testbed-node-5]
2026-04-09 04:30:00.551649 | orchestrator |
2026-04-09 04:30:00.551656 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-09 04:30:00.551662 | orchestrator | Thursday 09 April 2026 04:29:43 +0000 (0:00:01.341) 0:03:29.010 ********
2026-04-09 04:30:00.551668 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:30:00.551674 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:30:00.551681 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:30:00.551687 | orchestrator |
2026-04-09 04:30:00.551694 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-09 04:30:00.551700 | orchestrator | Thursday 09 April 2026 04:29:45 +0000 (0:00:01.715) 0:03:30.725 ********
2026-04-09 04:30:00.551706 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:30:00.551712 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:30:00.551719 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:30:00.551725 | orchestrator |
2026-04-09 04:30:00.551730 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-09 04:30:00.551735 | orchestrator | Thursday 09 April 2026 04:29:47 +0000 (0:00:02.219) 0:03:32.944 ********
2026-04-09 04:30:00.551741 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:30:00.551746 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:30:00.551751 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:30:00.551756 | orchestrator |
2026-04-09 04:30:00.551761 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-09 04:30:00.551767 | orchestrator | Thursday 09 April 2026 04:29:49 +0000 (0:00:02.421) 0:03:35.366 ********
2026-04-09 04:30:00.551772 | orchestrator | changed: [testbed-node-3]
2026-04-09 04:30:00.551777 | orchestrator | changed: [testbed-node-4]
2026-04-09 04:30:00.551786 | orchestrator | changed: [testbed-node-5]
2026-04-09 04:30:00.551792 | orchestrator |
2026-04-09 04:30:00.551797 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-09 04:30:00.551802 | orchestrator |
2026-04-09 04:30:00.551807 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-09 04:30:00.551816 | orchestrator | Thursday 09 April 2026 04:29:58 +0000 (0:00:08.468) 0:03:43.834 ********
2026-04-09 04:30:00.551821 | orchestrator | ok: [testbed-manager]
2026-04-09 04:30:00.551827 | orchestrator |
2026-04-09 04:30:00.551832 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-09 04:30:00.551841 | orchestrator | Thursday 09 April 2026 04:30:00 +0000 (0:00:02.362) 0:03:46.197 ********
2026-04-09 04:31:13.194874 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.194991 | orchestrator |
2026-04-09 04:31:13.195008 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-09 04:31:13.195021 | orchestrator | Thursday 09 April 2026 04:30:02 +0000 (0:00:01.598) 0:03:47.795 ********
2026-04-09 04:31:13.195033 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-09 04:31:13.195045 | orchestrator |
2026-04-09 04:31:13.195056 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-09 04:31:13.195067 | orchestrator | Thursday 09 April 2026 04:30:03 +0000 (0:00:01.692) 0:03:49.487 ********
2026-04-09 04:31:13.195078 | orchestrator | changed: [testbed-manager]
2026-04-09 04:31:13.195090 | orchestrator |
2026-04-09 04:31:13.195102 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-09 04:31:13.195113 | orchestrator | Thursday 09 April 2026 04:30:06 +0000 (0:00:02.206) 0:03:51.694 ********
2026-04-09 04:31:13.195171 | orchestrator | changed: [testbed-manager]
2026-04-09 04:31:13.195183 | orchestrator |
2026-04-09 04:31:13.195195 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-09 04:31:13.195206 | orchestrator | Thursday 09 April 2026 04:30:07 +0000 (0:00:01.692) 0:03:53.387 ********
2026-04-09 04:31:13.195217 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 04:31:13.195228 | orchestrator |
2026-04-09 04:31:13.195239 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-09 04:31:13.195249 | orchestrator | Thursday 09 April 2026 04:30:10 +0000 (0:00:02.899) 0:03:56.286 ********
2026-04-09 04:31:13.195261 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 04:31:13.195272 | orchestrator |
2026-04-09 04:31:13.195283 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-09 04:31:13.195294 | orchestrator | Thursday 09 April 2026 04:30:12 +0000 (0:00:01.886) 0:03:58.173 ********
2026-04-09 04:31:13.195305 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195316 | orchestrator |
2026-04-09 04:31:13.195327 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-09 04:31:13.195338 | orchestrator | Thursday 09 April 2026 04:30:14 +0000 (0:00:01.505) 0:03:59.678 ********
2026-04-09 04:31:13.195349 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195360 | orchestrator |
2026-04-09 04:31:13.195371 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-09 04:31:13.195382 | orchestrator |
2026-04-09 04:31:13.195393 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-09 04:31:13.195404 | orchestrator | Thursday 09 April 2026 04:30:16 +0000 (0:00:02.048) 0:04:01.727 ********
2026-04-09 04:31:13.195417 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195431 | orchestrator |
2026-04-09 04:31:13.195443 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-09 04:31:13.195457 | orchestrator | Thursday 09 April 2026 04:30:17 +0000 (0:00:01.204) 0:04:02.932 ********
2026-04-09 04:31:13.195470 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 04:31:13.195483 | orchestrator |
2026-04-09 04:31:13.195496 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-09 04:31:13.195510 | orchestrator | Thursday 09 April 2026 04:30:19 +0000 (0:00:01.748) 0:04:04.680 ********
2026-04-09 04:31:13.195548 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195562 | orchestrator |
2026-04-09 04:31:13.195575 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-09 04:31:13.195588 | orchestrator | Thursday 09 April 2026 04:30:20 +0000 (0:00:01.970) 0:04:06.651 ********
2026-04-09 04:31:13.195601 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195614 | orchestrator |
2026-04-09 04:31:13.195626 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-09 04:31:13.195638 | orchestrator | Thursday 09 April 2026 04:30:23 +0000 (0:00:02.914) 0:04:09.566 ********
2026-04-09 04:31:13.195652 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195665 | orchestrator |
2026-04-09 04:31:13.195678 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-09 04:31:13.195691 | orchestrator | Thursday 09 April 2026 04:30:25 +0000 (0:00:01.591) 0:04:11.158 ********
2026-04-09 04:31:13.195704 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195717 | orchestrator |
2026-04-09 04:31:13.195730 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-09 04:31:13.195742 | orchestrator | Thursday 09 April 2026 04:30:27 +0000 (0:00:01.545) 0:04:12.704 ********
2026-04-09 04:31:13.195755 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195768 | orchestrator |
2026-04-09 04:31:13.195780 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-09 04:31:13.195791 | orchestrator | Thursday 09 April 2026 04:30:28 +0000 (0:00:01.705) 0:04:14.410 ********
2026-04-09 04:31:13.195802 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195813 | orchestrator |
2026-04-09 04:31:13.195824 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-09 04:31:13.195835 | orchestrator | Thursday 09 April 2026 04:30:31 +0000 (0:00:02.853) 0:04:17.263 ********
2026-04-09 04:31:13.195845 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:13.195856 | orchestrator |
2026-04-09 04:31:13.195867 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-09 04:31:13.195878 | orchestrator |
2026-04-09 04:31:13.195889 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-09 04:31:13.195899 | orchestrator | Thursday 09 April 2026 04:30:33 +0000 (0:00:02.211) 0:04:19.474 ********
2026-04-09 04:31:13.195910 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:31:13.195921 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:31:13.195932 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:31:13.195943 | orchestrator |
2026-04-09 04:31:13.195954 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-09 04:31:13.195965 | orchestrator | Thursday 09 April 2026 04:30:35 +0000 (0:00:01.477) 0:04:20.952 ********
2026-04-09 04:31:13.195976 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:31:13.195987 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:31:13.195997 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:31:13.196009 | orchestrator |
2026-04-09 04:31:13.196055 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-09 04:31:13.196069 | orchestrator | Thursday 09 April 2026 04:30:36 +0000 (0:00:01.506) 0:04:22.458 ********
2026-04-09 04:31:13.196080 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:31:13.196091 | orchestrator |
2026-04-09 04:31:13.196102 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-09 04:31:13.196113 | orchestrator | Thursday 09 April 2026 04:30:38 +0000 (0:00:02.036) 0:04:24.495 ********
2026-04-09 04:31:13.196182 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 04:31:13.196195 | orchestrator |
2026-04-09 04:31:13.196206 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-09 04:31:13.196217 | orchestrator | Thursday 09 April 2026 04:30:40 +0000 (0:00:01.877) 0:04:26.372 ********
2026-04-09 04:31:13.196228 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 04:31:13.196248 | orchestrator |
2026-04-09 04:31:13.196259 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-09 04:31:13.196270 | orchestrator | Thursday 09 April 2026 04:30:42 +0000 (0:00:01.834) 0:04:28.207 ********
2026-04-09 04:31:13.196281 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:31:13.196292 | orchestrator |
2026-04-09 04:31:13.196302 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-09 04:31:13.196313 | orchestrator | Thursday 09 April 2026 04:30:43 +0000 (0:00:01.165) 0:04:29.373 ********
2026-04-09 04:31:13.196324 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 04:31:13.196335 | orchestrator |
2026-04-09 04:31:13.196346 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-09 04:31:13.196357 | orchestrator | Thursday 09 April 2026 04:30:45 +0000 (0:00:02.170) 0:04:31.543 ********
2026-04-09 04:31:13.196368 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 04:31:13.196379 | orchestrator |
2026-04-09 04:31:13.196390 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-09 04:31:13.196401 | orchestrator | Thursday 09 April 2026 04:30:48 +0000 (0:00:02.386) 0:04:33.930 ********
2026-04-09 04:31:13.196412 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 04:31:13.196423 | orchestrator |
2026-04-09 04:31:13.196434 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-09 04:31:13.196445 | orchestrator | Thursday 09 April 2026 04:30:49 +0000 (0:00:01.160) 0:04:35.090 ********
2026-04-09 04:31:13.196456 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 04:31:13.196466 | orchestrator |
2026-04-09 04:31:13.196477 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-09 04:31:13.196488 | orchestrator | Thursday 09 April 2026 04:30:50 +0000 (0:00:01.148) 0:04:36.238 ********
2026-04-09 04:31:13.196499 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-04-09 04:31:13.196510 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-04-09 04:31:13.196523 | orchestrator | }
2026-04-09 04:31:13.196535 | orchestrator |
2026-04-09 04:31:13.196546 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-09 04:31:13.196557 | orchestrator | Thursday 09 April 2026 04:30:51 +0000 (0:00:01.189) 0:04:37.428 ********
2026-04-09 04:31:13.196567 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:31:13.196578 | orchestrator |
2026-04-09 04:31:13.196589 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-09 04:31:13.196600 | orchestrator | Thursday 09 April 2026 04:30:52 +0000 (0:00:01.178) 0:04:38.607 ********
2026-04-09 04:31:13.196611 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-09 04:31:13.196622 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-09 04:31:13.196633 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-09 04:31:13.196644 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-09 04:31:13.196655 | orchestrator |
2026-04-09 04:31:13.196666 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-09 04:31:13.196677 | orchestrator | Thursday 09 April 2026 04:30:58 +0000 (0:00:06.016) 0:04:44.623 ********
2026-04-09 04:31:13.196688 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 04:31:13.196698 | orchestrator |
2026-04-09 04:31:13.196709 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-09 04:31:13.196720 | orchestrator | Thursday 09 April 2026 04:31:01 +0000 (0:00:02.474) 0:04:47.098 ********
2026-04-09 04:31:13.196731 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 04:31:13.196741 | orchestrator |
2026-04-09 04:31:13.196752 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-09 04:31:13.196763 | orchestrator | Thursday 09 April 2026 04:31:04 +0000 (0:00:02.726) 0:04:49.825 ********
2026-04-09 04:31:13.196774 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 04:31:13.196792 | orchestrator |
2026-04-09 04:31:13.196804 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-09 04:31:13.196814 | orchestrator | Thursday 09 April 2026 04:31:08 +0000 (0:00:04.444) 0:04:54.270 ********
2026-04-09 04:31:13.196825 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:31:13.196836 | orchestrator |
2026-04-09 04:31:13.196847 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-09 04:31:13.196858 | orchestrator | Thursday 09 April 2026 04:31:09 +0000 (0:00:01.193) 0:04:55.464 ********
2026-04-09 04:31:13.196869 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-09 04:31:13.196886 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-09 04:31:13.196897 | orchestrator |
2026-04-09 04:31:13.196908 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-09 04:31:13.196919 | orchestrator | Thursday 09 April 2026 04:31:12 +0000 (0:00:03.116) 0:04:58.581 ********
2026-04-09 04:31:13.196930 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:31:13.196949 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:31:42.820214 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:31:42.820332 | orchestrator |
2026-04-09 04:31:42.820357 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-09 04:31:42.820377 | orchestrator | Thursday 09 April 2026 04:31:14 +0000 (0:00:01.444) 0:05:00.025 ********
2026-04-09 04:31:42.820394 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:31:42.820412 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:31:42.820430 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:31:42.820448 | orchestrator |
2026-04-09 04:31:42.820465 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-09 04:31:42.820483 | orchestrator |
2026-04-09 04:31:42.820500 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-09 04:31:42.820517 | orchestrator | Thursday 09 April 2026 04:31:16 +0000 (0:00:02.483) 0:05:02.509 ********
2026-04-09 04:31:42.820535 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:42.820553 | orchestrator |
2026-04-09 04:31:42.820570 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-09 04:31:42.820587 | orchestrator | Thursday 09 April 2026 04:31:18 +0000 (0:00:01.234) 0:05:03.744 ********
2026-04-09 04:31:42.820605 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 04:31:42.820623 | orchestrator |
2026-04-09 04:31:42.820637 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-09 04:31:42.820648 | orchestrator | Thursday 09 April 2026 04:31:19 +0000 (0:00:01.550) 0:05:05.295 ********
2026-04-09 04:31:42.820658 | orchestrator | ok: [testbed-manager]
2026-04-09 04:31:42.820676 |
orchestrator | 2026-04-09 04:31:42.820693 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-09 04:31:42.820710 | orchestrator | 2026-04-09 04:31:42.820727 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-09 04:31:42.820745 | orchestrator | Thursday 09 April 2026 04:31:25 +0000 (0:00:06.270) 0:05:11.565 ******** 2026-04-09 04:31:42.820763 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:31:42.820779 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:31:42.820796 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:31:42.820814 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:31:42.820831 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:31:42.820848 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:31:42.820861 | orchestrator | 2026-04-09 04:31:42.820873 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-09 04:31:42.820883 | orchestrator | Thursday 09 April 2026 04:31:27 +0000 (0:00:01.909) 0:05:13.475 ******** 2026-04-09 04:31:42.820893 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-09 04:31:42.820902 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-09 04:31:42.820942 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-09 04:31:42.820953 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-09 04:31:42.820962 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-09 04:31:42.820972 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-09 04:31:42.820981 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 
2026-04-09 04:31:42.820991 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-09 04:31:42.821001 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-09 04:31:42.821010 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-09 04:31:42.821019 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-09 04:31:42.821029 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-09 04:31:42.821038 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-09 04:31:42.821048 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-09 04:31:42.821057 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-09 04:31:42.821067 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-09 04:31:42.821076 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-09 04:31:42.821086 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-09 04:31:42.821095 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-09 04:31:42.821105 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-09 04:31:42.821132 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-09 04:31:42.821143 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-09 04:31:42.821153 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-09 
04:31:42.821162 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-09 04:31:42.821172 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-09 04:31:42.821181 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-09 04:31:42.821209 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-09 04:31:42.821219 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-09 04:31:42.821229 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-09 04:31:42.821238 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-09 04:31:42.821248 | orchestrator | 2026-04-09 04:31:42.821258 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-09 04:31:42.821267 | orchestrator | Thursday 09 April 2026 04:31:38 +0000 (0:00:10.690) 0:05:24.165 ******** 2026-04-09 04:31:42.821277 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:31:42.821287 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:31:42.821296 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:31:42.821306 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:31:42.821316 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:31:42.821325 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:31:42.821335 | orchestrator | 2026-04-09 04:31:42.821345 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-09 04:31:42.821362 | orchestrator | Thursday 09 April 2026 04:31:40 +0000 (0:00:01.675) 0:05:25.841 ******** 2026-04-09 04:31:42.821373 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:31:42.821385 | orchestrator | skipping: [testbed-node-4] 
2026-04-09 04:31:42.821402 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:31:42.821419 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:31:42.821436 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:31:42.821453 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:31:42.821470 | orchestrator | 2026-04-09 04:31:42.821487 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:31:42.821523 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 04:31:42.821539 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-09 04:31:42.821549 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-09 04:31:42.821559 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-09 04:31:42.821569 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 04:31:42.821578 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 04:31:42.821588 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 04:31:42.821598 | orchestrator | 2026-04-09 04:31:42.821608 | orchestrator | 2026-04-09 04:31:42.821617 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:31:42.821627 | orchestrator | Thursday 09 April 2026 04:31:42 +0000 (0:00:02.617) 0:05:28.459 ******** 2026-04-09 04:31:42.821637 | orchestrator | =============================================================================== 2026-04-09 04:31:42.821647 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 56.86s 2026-04-09 04:31:42.821656 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 24.19s 2026-04-09 04:31:42.821667 | orchestrator | Manage labels ---------------------------------------------------------- 10.69s 2026-04-09 04:31:42.821676 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.47s 2026-04-09 04:31:42.821686 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.27s 2026-04-09 04:31:42.821696 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 6.02s 2026-04-09 04:31:42.821706 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.61s 2026-04-09 04:31:42.821715 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.44s 2026-04-09 04:31:42.821725 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 4.10s 2026-04-09 04:31:42.821735 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 3.90s 2026-04-09 04:31:42.821745 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.19s 2026-04-09 04:31:42.821754 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.12s 2026-04-09 04:31:42.821764 | orchestrator | k3s_server : Kill the temporary service used for initialization --------- 3.05s 2026-04-09 04:31:42.821774 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 2.98s 2026-04-09 04:31:42.821783 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.92s 2026-04-09 04:31:42.821803 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.90s 2026-04-09 04:31:42.821813 | orchestrator | kubectl : 
Install required packages ------------------------------------- 2.85s 2026-04-09 04:31:42.821822 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.76s 2026-04-09 04:31:42.821840 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.73s 2026-04-09 04:31:43.078164 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.73s 2026-04-09 04:31:43.226004 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-09 04:31:43.226211 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-04-09 04:31:43.233868 | orchestrator | + set -e 2026-04-09 04:31:43.233946 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 04:31:43.233968 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 04:31:43.233988 | orchestrator | ++ INTERACTIVE=false 2026-04-09 04:31:43.234006 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 04:31:43.234087 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 04:31:43.234108 | orchestrator | + osism apply openstackclient 2026-04-09 04:31:54.553382 | orchestrator | 2026-04-09 04:31:54 | INFO  | Prepare task for execution of openstackclient. 2026-04-09 04:31:54.659340 | orchestrator | 2026-04-09 04:31:54 | INFO  | Task 2489a3cd-f4ea-47e2-af81-0c1f3a48b610 (openstackclient) was prepared for execution. 2026-04-09 04:31:54.659414 | orchestrator | 2026-04-09 04:31:54 | INFO  | It takes a moment until task 2489a3cd-f4ea-47e2-af81-0c1f3a48b610 (openstackclient) has been started and output is visible here. 
2026-04-09 04:32:34.639381 | orchestrator | 2026-04-09 04:32:34.639493 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-09 04:32:34.639511 | orchestrator | 2026-04-09 04:32:34.639524 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-09 04:32:34.639535 | orchestrator | Thursday 09 April 2026 04:32:01 +0000 (0:00:03.115) 0:00:03.115 ******** 2026-04-09 04:32:34.639547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-09 04:32:34.639560 | orchestrator | 2026-04-09 04:32:34.639571 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-09 04:32:34.639583 | orchestrator | Thursday 09 April 2026 04:32:03 +0000 (0:00:01.902) 0:00:05.017 ******** 2026-04-09 04:32:34.639595 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-09 04:32:34.639607 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-09 04:32:34.639618 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-09 04:32:34.639629 | orchestrator | 2026-04-09 04:32:34.639640 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-09 04:32:34.639651 | orchestrator | Thursday 09 April 2026 04:32:06 +0000 (0:00:02.731) 0:00:07.749 ******** 2026-04-09 04:32:34.639662 | orchestrator | changed: [testbed-manager] 2026-04-09 04:32:34.639673 | orchestrator | 2026-04-09 04:32:34.639685 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-09 04:32:34.639696 | orchestrator | Thursday 09 April 2026 04:32:08 +0000 (0:00:02.298) 0:00:10.047 ******** 2026-04-09 04:32:34.639707 | orchestrator | ok: [testbed-manager] 2026-04-09 04:32:34.639719 | 
orchestrator | 2026-04-09 04:32:34.639730 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-09 04:32:34.639741 | orchestrator | Thursday 09 April 2026 04:32:10 +0000 (0:00:02.216) 0:00:12.264 ******** 2026-04-09 04:32:34.639752 | orchestrator | ok: [testbed-manager] 2026-04-09 04:32:34.639763 | orchestrator | 2026-04-09 04:32:34.639774 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-09 04:32:34.639785 | orchestrator | Thursday 09 April 2026 04:32:12 +0000 (0:00:01.872) 0:00:14.137 ******** 2026-04-09 04:32:34.639796 | orchestrator | ok: [testbed-manager] 2026-04-09 04:32:34.639831 | orchestrator | 2026-04-09 04:32:34.639843 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-09 04:32:34.639854 | orchestrator | Thursday 09 April 2026 04:32:14 +0000 (0:00:01.662) 0:00:15.800 ******** 2026-04-09 04:32:34.639865 | orchestrator | changed: [testbed-manager] 2026-04-09 04:32:34.639876 | orchestrator | 2026-04-09 04:32:34.639887 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-09 04:32:34.639898 | orchestrator | Thursday 09 April 2026 04:32:28 +0000 (0:00:14.647) 0:00:30.447 ******** 2026-04-09 04:32:34.639909 | orchestrator | changed: [testbed-manager] 2026-04-09 04:32:34.639919 | orchestrator | 2026-04-09 04:32:34.639930 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-09 04:32:34.639941 | orchestrator | Thursday 09 April 2026 04:32:30 +0000 (0:00:01.834) 0:00:32.282 ******** 2026-04-09 04:32:34.639952 | orchestrator | changed: [testbed-manager] 2026-04-09 04:32:34.639963 | orchestrator | 2026-04-09 04:32:34.639974 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-09 04:32:34.639985 | orchestrator | Thursday 09 April 2026 
04:32:32 +0000 (0:00:01.628) 0:00:33.911 ******** 2026-04-09 04:32:34.639995 | orchestrator | ok: [testbed-manager] 2026-04-09 04:32:34.640006 | orchestrator | 2026-04-09 04:32:34.640017 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:32:34.640028 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 04:32:34.640040 | orchestrator | 2026-04-09 04:32:34.640051 | orchestrator | 2026-04-09 04:32:34.640062 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:32:34.640072 | orchestrator | Thursday 09 April 2026 04:32:34 +0000 (0:00:01.922) 0:00:35.834 ******** 2026-04-09 04:32:34.640083 | orchestrator | =============================================================================== 2026-04-09 04:32:34.640094 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 14.65s 2026-04-09 04:32:34.640140 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.73s 2026-04-09 04:32:34.640153 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.30s 2026-04-09 04:32:34.640164 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.22s 2026-04-09 04:32:34.640175 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.92s 2026-04-09 04:32:34.640186 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.90s 2026-04-09 04:32:34.640197 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.87s 2026-04-09 04:32:34.640208 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.84s 2026-04-09 04:32:34.640219 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.66s 2026-04-09 
04:32:34.640230 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.63s 2026-04-09 04:32:34.899733 | orchestrator | + osism apply -a upgrade common 2026-04-09 04:32:36.295842 | orchestrator | 2026-04-09 04:32:36 | INFO  | Prepare task for execution of common. 2026-04-09 04:32:36.362473 | orchestrator | 2026-04-09 04:32:36 | INFO  | Task d32f77c0-9b87-4a59-8d4d-bbf436a14c71 (common) was prepared for execution. 2026-04-09 04:32:36.362589 | orchestrator | 2026-04-09 04:32:36 | INFO  | It takes a moment until task d32f77c0-9b87-4a59-8d4d-bbf436a14c71 (common) has been started and output is visible here. 2026-04-09 04:32:54.135289 | orchestrator | 2026-04-09 04:32:54.135383 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-09 04:32:54.135397 | orchestrator | 2026-04-09 04:32:54.135407 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-09 04:32:54.135416 | orchestrator | Thursday 09 April 2026 04:32:42 +0000 (0:00:02.279) 0:00:02.279 ******** 2026-04-09 04:32:54.135426 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 04:32:54.135458 | orchestrator | 2026-04-09 04:32:54.135468 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-09 04:32:54.135477 | orchestrator | Thursday 09 April 2026 04:32:45 +0000 (0:00:03.153) 0:00:05.433 ******** 2026-04-09 04:32:54.135487 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 04:32:54.135496 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 04:32:54.135505 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 04:32:54.135513 | orchestrator | ok: [testbed-node-2] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 04:32:54.135522 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 04:32:54.135531 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 04:32:54.135540 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 04:32:54.135549 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 04:32:54.135558 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 04:32:54.135566 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 04:32:54.135575 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 04:32:54.135584 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 04:32:54.135593 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 04:32:54.135601 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 04:32:54.135610 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 04:32:54.135619 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 04:32:54.135627 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 04:32:54.135637 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 04:32:54.135645 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 04:32:54.135654 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 04:32:54.135663 | orchestrator | 
ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 04:32:54.135671 | orchestrator | 2026-04-09 04:32:54.135680 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-09 04:32:54.135689 | orchestrator | Thursday 09 April 2026 04:32:49 +0000 (0:00:03.938) 0:00:09.372 ******** 2026-04-09 04:32:54.135698 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 04:32:54.135708 | orchestrator | 2026-04-09 04:32:54.135717 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-09 04:32:54.135741 | orchestrator | Thursday 09 April 2026 04:32:51 +0000 (0:00:02.831) 0:00:12.203 ******** 2026-04-09 04:32:54.135761 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:32:54.135776 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-04-09 04:32:54.135810 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:32:54.135822 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:32:54.135833 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:32:54.135844 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:54.135854 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:32:54.135869 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:32:54.135880 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:54.135910 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281438 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281543 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281564 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281609 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281630 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-09 04:32:57.281680 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281699 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281742 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281761 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281777 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281795 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:32:57.281813 | orchestrator | 2026-04-09 04:32:57.281833 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-09 04:32:57.281851 | orchestrator | Thursday 09 April 2026 04:32:56 +0000 (0:00:04.791) 0:00:16.994 ******** 2026-04-09 04:32:57.282247 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:32:57.282298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:32:57.282337 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:57.282358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:57.282410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:32:58.258968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:32:58.259256 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:32:58.259276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259292 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:32:58.259307 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:32:58.259323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:32:58.259339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:32:58.259422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259438 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:32:58.259460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:32:58.259486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259504 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:32:58.259522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:32:58.259577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:00.906288 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:33:00.906395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:00.906412 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:33:00.906422 | orchestrator | 2026-04-09 04:33:00.906432 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS 
key] ****** 2026-04-09 04:33:00.906442 | orchestrator | Thursday 09 April 2026 04:32:59 +0000 (0:00:02.687) 0:00:19.682 ******** 2026-04-09 04:33:00.906453 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:00.906490 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:00.906501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:00.906549 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:00.906560 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:00.906570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:00.906597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:00.906607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:00.906624 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:33:00.906633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:00.906643 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:33:00.906656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:00.906666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:00.906675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:00.906685 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:33:00.906694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:00.906711 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:14.652558 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:33:14.652656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:14.652690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:14.652712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:14.652720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:14.652726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:14.652734 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:33:14.652740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:14.652746 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:33:14.652753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:14.652759 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:33:14.652765 | orchestrator | 2026-04-09 04:33:14.652772 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-09 04:33:14.652795 | orchestrator | Thursday 09 April 2026 04:33:02 +0000 (0:00:03.473) 0:00:23.155 ******** 2026-04-09 04:33:14.652808 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:33:14.652814 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:33:14.652820 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:33:14.652826 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:33:14.652832 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:33:14.652837 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:33:14.652843 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:33:14.652848 | orchestrator | 2026-04-09 04:33:14.652865 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-09 04:33:14.652871 | orchestrator | Thursday 09 April 2026 04:33:05 +0000 (0:00:02.351) 0:00:25.507 ******** 2026-04-09 04:33:14.652876 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:33:14.652882 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:33:14.652887 | orchestrator | 
skipping: [testbed-node-1] 2026-04-09 04:33:14.652893 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:33:14.652898 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:33:14.652903 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:33:14.652909 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:33:14.652915 | orchestrator | 2026-04-09 04:33:14.652921 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-09 04:33:14.652926 | orchestrator | Thursday 09 April 2026 04:33:07 +0000 (0:00:02.290) 0:00:27.798 ******** 2026-04-09 04:33:14.652932 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:33:14.652937 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:33:14.652943 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:33:14.652949 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:33:14.652954 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:33:14.652960 | orchestrator | skipping: [testbed-node-4] 2026-04-09 04:33:14.652966 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:33:14.652972 | orchestrator | 2026-04-09 04:33:14.652978 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-09 04:33:14.652983 | orchestrator | Thursday 09 April 2026 04:33:09 +0000 (0:00:02.237) 0:00:30.036 ******** 2026-04-09 04:33:14.652989 | orchestrator | changed: [testbed-manager] 2026-04-09 04:33:14.652995 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:33:14.653001 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:33:14.653007 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:33:14.653012 | orchestrator | changed: [testbed-node-3] 2026-04-09 04:33:14.653018 | orchestrator | changed: [testbed-node-4] 2026-04-09 04:33:14.653024 | orchestrator | changed: [testbed-node-5] 2026-04-09 04:33:14.653029 | orchestrator | 2026-04-09 04:33:14.653036 | orchestrator | TASK [common : Copying over config.json 
files for services] ******************** 2026-04-09 04:33:14.653042 | orchestrator | Thursday 09 April 2026 04:33:12 +0000 (0:00:02.883) 0:00:32.919 ******** 2026-04-09 04:33:14.653050 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:14.653059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:14.653065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:14.653080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:14.653098 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.071915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:17.072018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:17.072059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072108 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:17.072197 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:17.072365 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:39.166668 | orchestrator | 2026-04-09 04:33:39.166770 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-09 04:33:39.166785 | orchestrator | Thursday 09 April 2026 04:33:18 +0000 (0:00:05.515) 0:00:38.435 ******** 2026-04-09 04:33:39.166797 | orchestrator | [WARNING]: Skipped 2026-04-09 04:33:39.166809 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-09 04:33:39.166821 | orchestrator | to this access issue: 2026-04-09 04:33:39.166833 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-09 04:33:39.166843 | orchestrator | directory 2026-04-09 04:33:39.166854 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 04:33:39.166866 | orchestrator | 2026-04-09 04:33:39.166878 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-09 04:33:39.166889 | orchestrator | Thursday 09 April 2026 04:33:20 +0000 (0:00:02.495) 0:00:40.930 ******** 2026-04-09 04:33:39.166900 | orchestrator | [WARNING]: Skipped 2026-04-09 04:33:39.166911 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-09 04:33:39.166922 | orchestrator | to this access issue: 2026-04-09 04:33:39.166940 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-09 04:33:39.166952 | orchestrator | directory 2026-04-09 04:33:39.166963 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 04:33:39.166974 | orchestrator | 2026-04-09 04:33:39.166985 | orchestrator | TASK [common : Find 
custom fluentd format config files] ************************ 2026-04-09 04:33:39.167015 | orchestrator | Thursday 09 April 2026 04:33:22 +0000 (0:00:01.981) 0:00:42.911 ******** 2026-04-09 04:33:39.167026 | orchestrator | [WARNING]: Skipped 2026-04-09 04:33:39.167038 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-09 04:33:39.167049 | orchestrator | to this access issue: 2026-04-09 04:33:39.167060 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-09 04:33:39.167071 | orchestrator | directory 2026-04-09 04:33:39.167082 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 04:33:39.167093 | orchestrator | 2026-04-09 04:33:39.167104 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-09 04:33:39.167145 | orchestrator | Thursday 09 April 2026 04:33:25 +0000 (0:00:02.525) 0:00:45.437 ******** 2026-04-09 04:33:39.167157 | orchestrator | [WARNING]: Skipped 2026-04-09 04:33:39.167168 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-09 04:33:39.167179 | orchestrator | to this access issue: 2026-04-09 04:33:39.167190 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-09 04:33:39.167201 | orchestrator | directory 2026-04-09 04:33:39.167212 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 04:33:39.167223 | orchestrator | 2026-04-09 04:33:39.167233 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-09 04:33:39.167245 | orchestrator | Thursday 09 April 2026 04:33:27 +0000 (0:00:02.123) 0:00:47.560 ******** 2026-04-09 04:33:39.167256 | orchestrator | changed: [testbed-manager] 2026-04-09 04:33:39.167267 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:33:39.167278 | orchestrator | changed: [testbed-node-0] 2026-04-09 
04:33:39.167289 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:33:39.167300 | orchestrator | changed: [testbed-node-4] 2026-04-09 04:33:39.167311 | orchestrator | changed: [testbed-node-3] 2026-04-09 04:33:39.167322 | orchestrator | changed: [testbed-node-5] 2026-04-09 04:33:39.167333 | orchestrator | 2026-04-09 04:33:39.167343 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-09 04:33:39.167354 | orchestrator | Thursday 09 April 2026 04:33:31 +0000 (0:00:04.207) 0:00:51.768 ******** 2026-04-09 04:33:39.167366 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 04:33:39.167378 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 04:33:39.167389 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 04:33:39.167400 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 04:33:39.167411 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 04:33:39.167421 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 04:33:39.167432 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 04:33:39.167443 | orchestrator | 2026-04-09 04:33:39.167454 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-09 04:33:39.167465 | orchestrator | Thursday 09 April 2026 04:33:34 +0000 (0:00:03.434) 0:00:55.203 ******** 2026-04-09 04:33:39.167476 | orchestrator | ok: [testbed-manager] 2026-04-09 04:33:39.167487 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:33:39.167498 | orchestrator | ok: [testbed-node-0] 
2026-04-09 04:33:39.167510 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:33:39.167521 | orchestrator | ok: [testbed-node-3] 2026-04-09 04:33:39.167532 | orchestrator | ok: [testbed-node-4] 2026-04-09 04:33:39.167542 | orchestrator | ok: [testbed-node-5] 2026-04-09 04:33:39.167553 | orchestrator | 2026-04-09 04:33:39.167564 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-09 04:33:39.167585 | orchestrator | Thursday 09 April 2026 04:33:38 +0000 (0:00:03.285) 0:00:58.488 ******** 2026-04-09 04:33:39.167618 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:39.167639 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:39.167651 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:39.167663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:39.167675 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:39.167687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:39.167698 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:39.167734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:46.447379 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:46.447494 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:46.447514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:46.447546 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:46.447559 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:46.447571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:46.447605 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:46.447636 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:46.447654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:46.447666 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:46.447678 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:46.447690 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:46.447702 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:46.447715 | orchestrator | 2026-04-09 04:33:46.447728 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-09 04:33:46.447740 | orchestrator | Thursday 09 April 2026 04:33:41 +0000 (0:00:02.803) 0:01:01.291 ******** 2026-04-09 04:33:46.447752 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 04:33:46.447774 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 04:33:46.447786 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 04:33:46.447797 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 04:33:46.447808 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 04:33:46.447820 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 04:33:46.447831 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 04:33:46.447843 | orchestrator | 2026-04-09 04:33:46.447854 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 
2026-04-09 04:33:46.447866 | orchestrator | Thursday 09 April 2026 04:33:44 +0000 (0:00:03.186) 0:01:04.478 ******** 2026-04-09 04:33:46.447878 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 04:33:46.447889 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 04:33:46.447901 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 04:33:46.447913 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 04:33:46.447927 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 04:33:46.447941 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 04:33:46.447953 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 04:33:46.447966 | orchestrator | 2026-04-09 04:33:46.447987 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-09 04:33:49.965225 | orchestrator | Thursday 09 April 2026 04:33:47 +0000 (0:00:03.595) 0:01:08.074 ******** 2026-04-09 04:33:49.965330 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:49.965351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:49.965362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:49.965371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:49.965400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:49.965410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:49.965419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 04:33:49.965452 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:49.965468 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:49.965478 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:49.965487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:49.965502 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:49.965512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:49.965527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:54.441284 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:54.441380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:54.441392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:54.441402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:54.441430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:54.441439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:54.441447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 04:33:54.441456 | orchestrator | 2026-04-09 04:33:54.441465 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-09 04:33:54.441475 | orchestrator | Thursday 09 April 2026 04:33:52 +0000 (0:00:04.217) 0:01:12.291 ******** 2026-04-09 04:33:54.441484 | orchestrator | changed: [testbed-manager] => { 2026-04-09 04:33:54.441493 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:33:54.441501 | orchestrator | } 2026-04-09 04:33:54.441510 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 04:33:54.441518 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:33:54.441526 | orchestrator | } 2026-04-09 
04:33:54.441534 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 04:33:54.441542 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:33:54.441550 | orchestrator | } 2026-04-09 04:33:54.441558 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 04:33:54.441566 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:33:54.441574 | orchestrator | } 2026-04-09 04:33:54.441582 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 04:33:54.441590 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:33:54.441598 | orchestrator | } 2026-04-09 04:33:54.441606 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 04:33:54.441614 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:33:54.441622 | orchestrator | } 2026-04-09 04:33:54.441630 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 04:33:54.441638 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:33:54.441645 | orchestrator | } 2026-04-09 04:33:54.441653 | orchestrator | 2026-04-09 04:33:54.441661 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 04:33:54.441684 | orchestrator | Thursday 09 April 2026 04:33:54 +0000 (0:00:02.024) 0:01:14.316 ******** 2026-04-09 04:33:54.441694 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:54.441719 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:54.441729 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:54.441739 | orchestrator | skipping: [testbed-manager] 2026-04-09 04:33:54.441748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:54.441758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:54.441769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:33:54.441778 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:33:54.441788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:33:54.441809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:34:00.196206 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:34:00.196220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196244 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:34:00.196256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:34:00.196268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196319 | orchestrator | skipping: [testbed-node-3] 2026-04-09 04:34:00.196367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:34:00.196381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196404 | 
orchestrator | skipping: [testbed-node-4] 2026-04-09 04:34:00.196415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 04:34:00.196427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:34:00.196450 | orchestrator | skipping: [testbed-node-5] 2026-04-09 04:34:00.196461 | orchestrator | 2026-04-09 04:34:00.196473 | orchestrator | TASK [common : Flush handlers] 
*************************************************
2026-04-09 04:34:00.196486 | orchestrator | Thursday 09 April 2026 04:33:57 +0000 (0:00:03.325) 0:01:17.641 ********
2026-04-09 04:34:00.196497 | orchestrator |
2026-04-09 04:34:00.196508 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 04:34:00.196527 | orchestrator | Thursday 09 April 2026 04:33:57 +0000 (0:00:00.475) 0:01:18.117 ********
2026-04-09 04:34:00.196538 | orchestrator |
2026-04-09 04:34:00.196549 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 04:34:00.196563 | orchestrator | Thursday 09 April 2026 04:33:58 +0000 (0:00:00.439) 0:01:18.557 ********
2026-04-09 04:34:00.196577 | orchestrator |
2026-04-09 04:34:00.196590 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 04:34:00.196603 | orchestrator | Thursday 09 April 2026 04:33:58 +0000 (0:00:00.453) 0:01:19.010 ********
2026-04-09 04:34:00.196616 | orchestrator |
2026-04-09 04:34:00.196630 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 04:34:00.196643 | orchestrator | Thursday 09 April 2026 04:33:59 +0000 (0:00:00.530) 0:01:19.541 ********
2026-04-09 04:34:00.196656 | orchestrator |
2026-04-09 04:34:00.196669 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 04:34:00.196683 | orchestrator | Thursday 09 April 2026 04:33:59 +0000 (0:00:00.420) 0:01:19.962 ********
2026-04-09 04:34:00.196696 | orchestrator |
2026-04-09 04:34:00.196715 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 04:34:00.196734 | orchestrator | Thursday 09 April 2026 04:34:00 +0000 (0:00:00.492) 0:01:20.454 ********
2026-04-09 04:35:37.771168 | orchestrator |
2026-04-09 04:35:37.771319 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-09 04:35:37.771349 | orchestrator | Thursday 09 April 2026 04:34:01 +0000 (0:00:00.898) 0:01:21.352 ********
2026-04-09 04:35:37.771370 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:35:37.771390 | orchestrator | changed: [testbed-node-4]
2026-04-09 04:35:37.771409 | orchestrator | changed: [testbed-node-5]
2026-04-09 04:35:37.771428 | orchestrator | changed: [testbed-manager]
2026-04-09 04:35:37.771446 | orchestrator | changed: [testbed-node-3]
2026-04-09 04:35:37.771464 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:35:37.771482 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:35:37.771502 | orchestrator |
2026-04-09 04:35:37.771520 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-09 04:35:37.771540 | orchestrator | Thursday 09 April 2026 04:34:42 +0000 (0:00:41.111) 0:02:02.464 ********
2026-04-09 04:35:37.771558 | orchestrator | changed: [testbed-manager]
2026-04-09 04:35:37.771577 | orchestrator | changed: [testbed-node-3]
2026-04-09 04:35:37.771594 | orchestrator | changed: [testbed-node-5]
2026-04-09 04:35:37.771613 | orchestrator | changed: [testbed-node-4]
2026-04-09 04:35:37.771630 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:35:37.771649 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:35:37.771668 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:35:37.771686 | orchestrator |
2026-04-09 04:35:37.771706 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-09 04:35:37.771726 | orchestrator | Thursday 09 April 2026 04:35:21 +0000 (0:00:39.698) 0:02:42.163 ********
2026-04-09 04:35:37.771745 | orchestrator | ok: [testbed-manager]
2026-04-09 04:35:37.771764 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:35:37.771777 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:35:37.771790 | orchestrator | ok: [testbed-node-3]
2026-04-09 04:35:37.771803 | orchestrator | ok: [testbed-node-4]
2026-04-09 04:35:37.771816 | orchestrator | ok: [testbed-node-5]
2026-04-09 04:35:37.771828 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:35:37.771841 | orchestrator |
2026-04-09 04:35:37.771854 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-09 04:35:37.771868 | orchestrator | Thursday 09 April 2026 04:35:25 +0000 (0:00:03.398) 0:02:45.562 ********
2026-04-09 04:35:37.771881 | orchestrator | changed: [testbed-manager]
2026-04-09 04:35:37.771894 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:35:37.771906 | orchestrator | changed: [testbed-node-5]
2026-04-09 04:35:37.771917 | orchestrator | changed: [testbed-node-3]
2026-04-09 04:35:37.771928 | orchestrator | changed: [testbed-node-4]
2026-04-09 04:35:37.771967 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:35:37.771979 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:35:37.771989 | orchestrator |
2026-04-09 04:35:37.772001 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 04:35:37.772013 | orchestrator | testbed-manager : ok=22  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 04:35:37.772026 | orchestrator | testbed-node-0 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 04:35:37.772037 | orchestrator | testbed-node-1 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 04:35:37.772047 | orchestrator | testbed-node-2 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 04:35:37.772090 | orchestrator | testbed-node-3 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 04:35:37.772102 | orchestrator | testbed-node-4 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 04:35:37.772113 | orchestrator | testbed-node-5 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 04:35:37.772124 | orchestrator |
2026-04-09 04:35:37.772135 | orchestrator |
2026-04-09 04:35:37.772146 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 04:35:37.772159 | orchestrator | Thursday 09 April 2026 04:35:37 +0000 (0:00:11.917) 0:02:57.479 ********
2026-04-09 04:35:37.772170 | orchestrator | ===============================================================================
2026-04-09 04:35:37.772180 | orchestrator | common : Restart fluentd container ------------------------------------- 41.11s
2026-04-09 04:35:37.772191 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 39.70s
2026-04-09 04:35:37.772202 | orchestrator | common : Restart cron container ---------------------------------------- 11.92s
2026-04-09 04:35:37.772213 | orchestrator | common : Copying over config.json files for services -------------------- 5.52s
2026-04-09 04:35:37.772224 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.79s
2026-04-09 04:35:37.772235 | orchestrator | service-check-containers : common | Check containers -------------------- 4.22s
2026-04-09 04:35:37.772246 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.21s
2026-04-09 04:35:37.772256 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.94s
2026-04-09 04:35:37.772267 | orchestrator | common : Flush handlers ------------------------------------------------- 3.71s
2026-04-09 04:35:37.772278 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.59s
2026-04-09 04:35:37.772305 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.47s
2026-04-09 04:35:37.772316 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.44s
2026-04-09 04:35:37.772349 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.40s
2026-04-09 04:35:37.772361 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.33s
2026-04-09 04:35:37.772372 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.28s
2026-04-09 04:35:37.772383 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.19s
2026-04-09 04:35:37.772394 | orchestrator | common : include_tasks -------------------------------------------------- 3.15s
2026-04-09 04:35:37.772405 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.88s
2026-04-09 04:35:37.772416 | orchestrator | common : include_tasks -------------------------------------------------- 2.83s
2026-04-09 04:35:37.772427 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.80s
2026-04-09 04:35:38.004040 | orchestrator | + osism apply -a upgrade loadbalancer
2026-04-09 04:35:39.394642 | orchestrator | 2026-04-09 04:35:39 | INFO  | Prepare task for execution of loadbalancer.
2026-04-09 04:35:39.460695 | orchestrator | 2026-04-09 04:35:39 | INFO  | Task e9d85884-e9f7-438c-a8c2-04b1c4394f21 (loadbalancer) was prepared for execution.
2026-04-09 04:35:39.460783 | orchestrator | 2026-04-09 04:35:39 | INFO  | It takes a moment until task e9d85884-e9f7-438c-a8c2-04b1c4394f21 (loadbalancer) has been started and output is visible here.
2026-04-09 04:36:12.702641 | orchestrator | 2026-04-09 04:36:12.702732 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 04:36:12.702743 | orchestrator | 2026-04-09 04:36:12.702750 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 04:36:12.702756 | orchestrator | Thursday 09 April 2026 04:35:45 +0000 (0:00:02.219) 0:00:02.219 ******** 2026-04-09 04:36:12.702763 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:36:12.702775 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:36:12.702784 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:36:12.702794 | orchestrator | 2026-04-09 04:36:12.702804 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 04:36:12.702815 | orchestrator | Thursday 09 April 2026 04:35:47 +0000 (0:00:02.375) 0:00:04.595 ******** 2026-04-09 04:36:12.702826 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-09 04:36:12.702836 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-09 04:36:12.702842 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-09 04:36:12.702848 | orchestrator | 2026-04-09 04:36:12.702855 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-09 04:36:12.702860 | orchestrator | 2026-04-09 04:36:12.702866 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-09 04:36:12.702872 | orchestrator | Thursday 09 April 2026 04:35:49 +0000 (0:00:02.435) 0:00:07.030 ******** 2026-04-09 04:36:12.702879 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:36:12.702885 | orchestrator | 2026-04-09 04:36:12.702891 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-04-09 04:36:12.702897 | orchestrator | Thursday 09 April 2026 04:35:51 +0000 (0:00:02.065) 0:00:09.096 ******** 2026-04-09 04:36:12.702903 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:36:12.702909 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:36:12.702915 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:36:12.702921 | orchestrator | 2026-04-09 04:36:12.702927 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-04-09 04:36:12.702933 | orchestrator | Thursday 09 April 2026 04:35:54 +0000 (0:00:02.511) 0:00:11.608 ******** 2026-04-09 04:36:12.702938 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:36:12.702944 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:36:12.702950 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:36:12.702956 | orchestrator | 2026-04-09 04:36:12.702962 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-09 04:36:12.702968 | orchestrator | Thursday 09 April 2026 04:35:56 +0000 (0:00:02.165) 0:00:13.773 ******** 2026-04-09 04:36:12.702973 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:36:12.702979 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:36:12.702985 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:36:12.702991 | orchestrator | 2026-04-09 04:36:12.702997 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-09 04:36:12.703002 | orchestrator | Thursday 09 April 2026 04:35:58 +0000 (0:00:01.723) 0:00:15.497 ******** 2026-04-09 04:36:12.703008 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:36:12.703014 | orchestrator | 2026-04-09 04:36:12.703020 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-09 04:36:12.703079 | orchestrator | Thursday 09 April 2026 04:36:00 +0000 (0:00:01.769) 0:00:17.267 ******** 2026-04-09 
04:36:12.703086 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:36:12.703092 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:36:12.703098 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:36:12.703104 | orchestrator | 2026-04-09 04:36:12.703110 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-09 04:36:12.703115 | orchestrator | Thursday 09 April 2026 04:36:01 +0000 (0:00:01.771) 0:00:19.039 ******** 2026-04-09 04:36:12.703121 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-09 04:36:12.703127 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-09 04:36:12.703133 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-09 04:36:12.703139 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-09 04:36:12.703145 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-09 04:36:12.703151 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-09 04:36:12.703157 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-09 04:36:12.703165 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-09 04:36:12.703171 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-09 04:36:12.703176 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-09 04:36:12.703182 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-09 04:36:12.703190 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
2026-04-09 04:36:12.703197 | orchestrator | 2026-04-09 04:36:12.703204 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-09 04:36:12.703210 | orchestrator | Thursday 09 April 2026 04:36:05 +0000 (0:00:03.452) 0:00:22.492 ******** 2026-04-09 04:36:12.703218 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-04-09 04:36:12.703225 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-04-09 04:36:12.703232 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-04-09 04:36:12.703239 | orchestrator | 2026-04-09 04:36:12.703246 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-09 04:36:12.703265 | orchestrator | Thursday 09 April 2026 04:36:07 +0000 (0:00:01.764) 0:00:24.256 ******** 2026-04-09 04:36:12.703272 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-04-09 04:36:12.703279 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-04-09 04:36:12.703286 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-04-09 04:36:12.703293 | orchestrator | 2026-04-09 04:36:12.703300 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-09 04:36:12.703307 | orchestrator | Thursday 09 April 2026 04:36:09 +0000 (0:00:02.287) 0:00:26.544 ******** 2026-04-09 04:36:12.703313 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-09 04:36:12.703320 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:36:12.703327 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-09 04:36:12.703334 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:36:12.703341 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-09 04:36:12.703348 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:36:12.703354 | orchestrator | 2026-04-09 04:36:12.703361 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-04-09 04:36:12.703368 | orchestrator | Thursday 09 April 2026 04:36:11 +0000 (0:00:02.060) 0:00:28.605 ******** 2026-04-09 04:36:12.703385 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 04:36:12.703405 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 04:36:12.703413 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 04:36:12.703423 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:36:12.703430 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:36:12.703442 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:36:24.626473 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:36:24.626567 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:36:24.626575 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:36:24.626580 | orchestrator | 2026-04-09 04:36:24.626586 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-09 04:36:24.626592 | orchestrator | Thursday 09 April 2026 04:36:14 +0000 (0:00:02.707) 0:00:31.312 ******** 2026-04-09 04:36:24.626597 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:36:24.626603 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:36:24.626607 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:36:24.626612 | orchestrator | 2026-04-09 04:36:24.626617 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-09 04:36:24.626621 | orchestrator | Thursday 09 April 2026 04:36:16 +0000 (0:00:02.305) 0:00:33.617 ******** 2026-04-09 04:36:24.626626 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-04-09 04:36:24.626632 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-04-09 04:36:24.626636 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-04-09 04:36:24.626641 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-04-09 04:36:24.626645 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-04-09 04:36:24.626659 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-04-09 04:36:24.626664 | orchestrator | 2026-04-09 04:36:24.626669 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-09 04:36:24.626673 | orchestrator | Thursday 09 April 2026 04:36:19 +0000 (0:00:02.650) 0:00:36.268 ******** 2026-04-09 04:36:24.626678 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:36:24.626683 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:36:24.626687 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:36:24.626692 | orchestrator | 2026-04-09 04:36:24.626696 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-09 04:36:24.626701 | orchestrator | Thursday 09 April 2026 04:36:21 +0000 (0:00:02.013) 0:00:38.281 ******** 2026-04-09 04:36:24.626705 | orchestrator | ok: 
[testbed-node-0] 2026-04-09 04:36:24.626710 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:36:24.626714 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:36:24.626719 | orchestrator | 2026-04-09 04:36:24.626723 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-09 04:36:24.626728 | orchestrator | Thursday 09 April 2026 04:36:23 +0000 (0:00:02.630) 0:00:40.911 ******** 2026-04-09 04:36:24.626733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 04:36:24.626755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:36:24.626763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:36:24.626772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 04:36:24.626781 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:36:24.626788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 04:36:24.626801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:36:24.626809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:36:24.626822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 04:36:24.626830 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
04:36:24.626843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 04:36:28.438767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:36:28.438906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:36:28.438958 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 04:36:28.438985 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:36:28.439007 | orchestrator | 2026-04-09 04:36:28.439128 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-09 04:36:28.439153 | orchestrator | Thursday 09 April 2026 04:36:25 +0000 (0:00:01.982) 0:00:42.894 ******** 2026-04-09 04:36:28.439173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 04:36:28.439257 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 04:36:28.439278 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 04:36:28.439325 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:36:28.439345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:36:28.439373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 04:36:28.439393 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:36:28.439424 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:36:28.439442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:36:28.439476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:36:41.494899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 04:36:41.495073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5', '__omit_place_holder__af00205119d6f0b606292baf58b7cddf65cd97b5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 04:36:41.495101 | orchestrator | 2026-04-09 04:36:41.495112 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-09 04:36:41.495123 | orchestrator | Thursday 09 April 2026 04:36:29 +0000 (0:00:03.850) 0:00:46.744 ******** 2026-04-09 04:36:41.495153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 04:36:41.495164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 04:36:41.495173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 04:36:41.495216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:36:41.495227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:36:41.495236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:36:41.495256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:36:41.495266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:36:41.495275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:36:41.495284 | orchestrator | 2026-04-09 04:36:41.495294 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-09 04:36:41.495303 | orchestrator | Thursday 09 April 2026 04:36:34 +0000 (0:00:04.502) 0:00:51.247 ******** 2026-04-09 04:36:41.495312 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-09 04:36:41.495322 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-09 04:36:41.495331 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-09 04:36:41.495339 | orchestrator | 2026-04-09 04:36:41.495348 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-09 04:36:41.495357 | orchestrator | Thursday 09 April 2026 04:36:37 +0000 (0:00:02.993) 0:00:54.240 ******** 2026-04-09 04:36:41.495366 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-09 04:36:41.495375 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-09 04:36:41.495384 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-09 04:36:41.495393 | orchestrator | 2026-04-09 04:36:41.495402 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-09 04:36:41.495416 | orchestrator | Thursday 09 April 2026 04:36:41 +0000 (0:00:04.392) 0:00:58.633 ******** 2026-04-09 04:37:03.616947 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:03.617135 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:03.617152 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:37:03.617166 | orchestrator | 2026-04-09 04:37:03.617179 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-09 04:37:03.617192 | orchestrator | Thursday 09 April 2026 04:36:43 +0000 (0:00:01.691) 0:01:00.325 ******** 2026-04-09 04:37:03.617204 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-09 04:37:03.617217 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-09 04:37:03.617228 | orchestrator | ok: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-09 04:37:03.617270 | orchestrator | 2026-04-09 04:37:03.617283 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-09 04:37:03.617295 | orchestrator | Thursday 09 April 2026 04:36:46 +0000 (0:00:03.282) 0:01:03.608 ******** 2026-04-09 04:37:03.617307 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-09 04:37:03.617319 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-09 04:37:03.617330 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-09 04:37:03.617341 | orchestrator | 2026-04-09 04:37:03.617352 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-09 04:37:03.617363 | orchestrator | Thursday 09 April 2026 04:36:49 +0000 (0:00:03.122) 0:01:06.730 ******** 2026-04-09 04:37:03.617374 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:37:03.617385 | orchestrator | 2026-04-09 04:37:03.617415 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-09 04:37:03.617427 | orchestrator | Thursday 09 April 2026 04:36:51 +0000 (0:00:01.967) 0:01:08.698 ******** 2026-04-09 04:37:03.617439 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-04-09 04:37:03.617454 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-04-09 04:37:03.617467 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-04-09 04:37:03.617480 | orchestrator | 2026-04-09 04:37:03.617493 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-09 04:37:03.617506 | 
orchestrator | Thursday 09 April 2026 04:36:54 +0000 (0:00:02.724) 0:01:11.423 ******** 2026-04-09 04:37:03.617519 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-09 04:37:03.617532 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-09 04:37:03.617545 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-09 04:37:03.617557 | orchestrator | 2026-04-09 04:37:03.617570 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-09 04:37:03.617583 | orchestrator | Thursday 09 April 2026 04:36:57 +0000 (0:00:02.995) 0:01:14.419 ******** 2026-04-09 04:37:03.617597 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:03.617609 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:03.617623 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:37:03.617636 | orchestrator | 2026-04-09 04:37:03.617647 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-09 04:37:03.617658 | orchestrator | Thursday 09 April 2026 04:36:58 +0000 (0:00:01.376) 0:01:15.796 ******** 2026-04-09 04:37:03.617669 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:03.617680 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:03.617691 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:37:03.617703 | orchestrator | 2026-04-09 04:37:03.617714 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-09 04:37:03.617725 | orchestrator | Thursday 09 April 2026 04:37:00 +0000 (0:00:01.707) 0:01:17.503 ******** 2026-04-09 04:37:03.617740 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 04:37:03.617776 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 04:37:03.617808 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 04:37:03.617820 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:37:03.617837 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:37:03.617850 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:37:03.617863 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:37:03.617875 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:37:03.617901 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:37:07.184839 | orchestrator | 2026-04-09 04:37:07.184934 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-09 04:37:07.184946 | orchestrator | Thursday 09 April 2026 04:37:04 +0000 (0:00:04.389) 0:01:21.893 ******** 2026-04-09 04:37:07.184957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 04:37:07.184983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:37:07.184992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:37:07.185047 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:07.185056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 04:37:07.185065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:37:07.185093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:37:07.185102 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:07.185126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 04:37:07.185135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:37:07.185146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:37:07.185155 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:37:07.185162 | orchestrator | 2026-04-09 04:37:07.185170 | orchestrator | TASK [service-cert-copy : 
mariadb | Copying over backend internal TLS key] ***** 2026-04-09 04:37:07.185177 | orchestrator | Thursday 09 April 2026 04:37:06 +0000 (0:00:01.987) 0:01:23.880 ******** 2026-04-09 04:37:07.185185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 04:37:07.185193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:37:07.185206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:37:07.185214 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:07.185233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 04:37:18.102107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:37:18.102237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:37:18.102255 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:18.102270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 04:37:18.102283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:37:18.102314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:37:18.102326 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:37:18.102338 | orchestrator | 2026-04-09 04:37:18.102350 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-09 04:37:18.102362 | orchestrator | Thursday 09 April 2026 04:37:08 +0000 (0:00:01.683) 0:01:25.564 ******** 2026-04-09 04:37:18.102373 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-09 04:37:18.102386 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-09 04:37:18.102397 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-09 04:37:18.102408 | orchestrator | 2026-04-09 04:37:18.102419 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-09 04:37:18.102430 | orchestrator | Thursday 09 April 2026 04:37:11 +0000 (0:00:02.827) 0:01:28.392 ******** 2026-04-09 04:37:18.102441 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-09 04:37:18.102452 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-09 04:37:18.102463 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-09 04:37:18.102474 | orchestrator | 2026-04-09 04:37:18.102503 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-09 04:37:18.102515 | orchestrator | Thursday 09 April 2026 04:37:13 +0000 (0:00:02.508) 0:01:30.900 ******** 2026-04-09 
04:37:18.102526 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 04:37:18.102537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 04:37:18.102548 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 04:37:18.102561 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 04:37:18.102573 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:18.102587 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 04:37:18.102600 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:18.102613 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 04:37:18.102625 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:37:18.102638 | orchestrator | 2026-04-09 04:37:18.102651 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-09 04:37:18.102664 | orchestrator | Thursday 09 April 2026 04:37:16 +0000 (0:00:02.344) 0:01:33.244 ******** 2026-04-09 04:37:18.102678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}}) 2026-04-09 04:37:18.102699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 04:37:18.103139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 04:37:18.103158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:37:18.103185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:37:22.425324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 04:37:22.425463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:37:22.425509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:37:22.425523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:37:22.425555 | orchestrator | 2026-04-09 04:37:22.426423 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-09 04:37:22.426485 | orchestrator | Thursday 09 April 2026 04:37:20 +0000 (0:00:04.077) 0:01:37.322 ******** 2026-04-09 04:37:22.426496 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 04:37:22.426506 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:37:22.426513 | orchestrator | } 2026-04-09 04:37:22.426521 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 04:37:22.426528 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:37:22.426535 | orchestrator | } 2026-04-09 04:37:22.426542 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 04:37:22.426549 | orchestrator |  
"msg": "Notifying handlers" 2026-04-09 04:37:22.426555 | orchestrator | } 2026-04-09 04:37:22.426563 | orchestrator | 2026-04-09 04:37:22.426570 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 04:37:22.426577 | orchestrator | Thursday 09 April 2026 04:37:21 +0000 (0:00:01.666) 0:01:38.989 ******** 2026-04-09 04:37:22.426586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 04:37:22.426630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:37:22.426639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:37:22.426660 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:22.426668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 04:37:22.426676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:37:22.426683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:37:22.426690 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:22.426697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 04:37:22.426704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:37:22.426722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:37:29.502412 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:37:29.502534 | orchestrator | 2026-04-09 04:37:29.502551 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-09 04:37:29.502565 | orchestrator | Thursday 09 April 2026 04:37:23 +0000 (0:00:01.898) 0:01:40.888 ******** 2026-04-09 04:37:29.502577 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:37:29.502590 | orchestrator | 2026-04-09 04:37:29.502601 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-09 04:37:29.502612 | orchestrator | Thursday 09 April 2026 04:37:25 +0000 (0:00:02.128) 0:01:43.017 ******** 2026-04-09 04:37:29.502628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:37:29.502646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 04:37:29.502659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:29.502672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 04:37:29.502718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:37:29.502754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:37:29.502767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 04:37:29.502779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:29.502791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 04:37:29.502802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:29.502833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 04:37:31.719775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 04:37:31.719891 | orchestrator | 2026-04-09 04:37:31.719917 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-09 04:37:31.719937 | orchestrator | Thursday 09 April 2026 04:37:30 +0000 (0:00:04.829) 0:01:47.846 ******** 2026-04-09 04:37:31.719959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:37:31.719983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 04:37:31.720075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:31.720130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 04:37:31.720167 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:31.720214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:37:31.720238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 04:37:31.720257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:31.720276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 04:37:31.720295 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:31.720314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:37:31.720356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 04:37:31.720387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:45.658282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 04:37:45.658386 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:37:45.658402 | orchestrator | 2026-04-09 04:37:45.658412 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-09 04:37:45.658422 | orchestrator | Thursday 09 April 2026 04:37:32 +0000 (0:00:02.270) 0:01:50.117 ******** 2026-04-09 04:37:45.658433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:37:45.658445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:37:45.658456 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:45.658465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:37:45.658475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:37:45.658508 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:45.658518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:37:45.658527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:37:45.658536 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:37:45.658544 | orchestrator | 2026-04-09 04:37:45.658553 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-09 04:37:45.658563 | orchestrator | Thursday 09 April 2026 04:37:35 +0000 (0:00:02.064) 0:01:52.181 ******** 2026-04-09 
04:37:45.658571 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:37:45.658581 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:37:45.658590 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:37:45.658598 | orchestrator | 2026-04-09 04:37:45.658607 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-09 04:37:45.658616 | orchestrator | Thursday 09 April 2026 04:37:37 +0000 (0:00:02.252) 0:01:54.434 ******** 2026-04-09 04:37:45.658625 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:37:45.658634 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:37:45.658642 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:37:45.658651 | orchestrator | 2026-04-09 04:37:45.658660 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-09 04:37:45.658669 | orchestrator | Thursday 09 April 2026 04:37:40 +0000 (0:00:02.869) 0:01:57.303 ******** 2026-04-09 04:37:45.658678 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:37:45.658687 | orchestrator | 2026-04-09 04:37:45.658696 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-09 04:37:45.658705 | orchestrator | Thursday 09 April 2026 04:37:41 +0000 (0:00:01.714) 0:01:59.017 ******** 2026-04-09 04:37:45.658816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:37:45.658837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:45.658850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:37:45.658870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:37:45.658886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:45.658897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:37:45.658919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:37:47.740696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:47.740803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:37:47.740820 | orchestrator | 2026-04-09 04:37:47.740834 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-09 04:37:47.740846 | orchestrator | Thursday 09 April 2026 04:37:46 +0000 (0:00:05.021) 0:02:04.039 ******** 2026-04-09 04:37:47.740877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:37:47.740892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:47.740905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:37:47.740937 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:37:47.740971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:37:47.740985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:47.740997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:37:47.741014 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:37:47.741027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:37:47.741039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 04:37:47.741065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:38:04.706850 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:04.706982 | orchestrator | 2026-04-09 04:38:04.707009 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-09 04:38:04.707030 | orchestrator | Thursday 09 April 2026 04:37:48 +0000 (0:00:01.942) 0:02:05.981 ******** 2026-04-09 04:38:04.707051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:04.707074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:04.707093 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:04.707112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:04.707132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:04.707151 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:04.707171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:04.707273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:04.707291 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:04.707303 | orchestrator | 2026-04-09 04:38:04.707315 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-09 04:38:04.707327 | orchestrator | Thursday 09 April 2026 04:37:50 +0000 (0:00:01.747) 0:02:07.729 ******** 2026-04-09 04:38:04.707338 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:38:04.707350 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:38:04.707361 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:38:04.707372 | orchestrator | 2026-04-09 04:38:04.707385 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-09 04:38:04.707399 | orchestrator | Thursday 09 April 2026 04:37:52 +0000 (0:00:02.261) 0:02:09.990 ******** 2026-04-09 04:38:04.707413 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:38:04.707427 | orchestrator | ok: [testbed-node-1] 
2026-04-09 04:38:04.707465 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:38:04.707478 | orchestrator | 2026-04-09 04:38:04.707491 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-09 04:38:04.707505 | orchestrator | Thursday 09 April 2026 04:37:55 +0000 (0:00:03.003) 0:02:12.993 ******** 2026-04-09 04:38:04.707519 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:04.707532 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:04.707545 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:04.707558 | orchestrator | 2026-04-09 04:38:04.707572 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-09 04:38:04.707586 | orchestrator | Thursday 09 April 2026 04:37:57 +0000 (0:00:01.663) 0:02:14.657 ******** 2026-04-09 04:38:04.707599 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:38:04.707612 | orchestrator | 2026-04-09 04:38:04.707625 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-09 04:38:04.707638 | orchestrator | Thursday 09 April 2026 04:37:58 +0000 (0:00:01.464) 0:02:16.121 ******** 2026-04-09 04:38:04.707653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 04:38:04.707693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 04:38:04.707706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 04:38:04.707718 | orchestrator | 2026-04-09 04:38:04.707729 | orchestrator | TASK [haproxy-config : Add configuration for 
ceph-rgw when using single external frontend] *** 2026-04-09 04:38:04.707741 | orchestrator | Thursday 09 April 2026 04:38:03 +0000 (0:00:04.274) 0:02:20.396 ******** 2026-04-09 04:38:04.707759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 04:38:04.707779 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:04.707791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 
04:38:04.707803 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:04.707821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 04:38:18.157643 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:18.157758 | orchestrator | 2026-04-09 04:38:18.157775 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-09 04:38:18.157789 | orchestrator | Thursday 09 April 2026 04:38:05 +0000 (0:00:02.625) 0:02:23.022 ******** 2026-04-09 04:38:18.157803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 04:38:18.157818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 04:38:18.157831 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:18.157843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 04:38:18.157891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 04:38:18.157905 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:18.157917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 04:38:18.157928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 
2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 04:38:18.157940 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:18.157951 | orchestrator | 2026-04-09 04:38:18.157962 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-09 04:38:18.157974 | orchestrator | Thursday 09 April 2026 04:38:08 +0000 (0:00:02.734) 0:02:25.757 ******** 2026-04-09 04:38:18.157985 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:18.157996 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:18.158007 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:18.158079 | orchestrator | 2026-04-09 04:38:18.158092 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-09 04:38:18.158103 | orchestrator | Thursday 09 April 2026 04:38:10 +0000 (0:00:01.936) 0:02:27.693 ******** 2026-04-09 04:38:18.158115 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:18.158126 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:18.158137 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:18.158148 | orchestrator | 2026-04-09 04:38:18.158161 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-09 04:38:18.158175 | orchestrator | Thursday 09 April 2026 04:38:12 +0000 (0:00:02.161) 0:02:29.854 ******** 2026-04-09 04:38:18.158188 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:38:18.158201 | orchestrator | 2026-04-09 04:38:18.158215 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-09 04:38:18.158228 | orchestrator | Thursday 09 April 2026 04:38:14 +0000 (0:00:01.623) 0:02:31.478 ******** 2026-04-09 04:38:18.158267 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:38:18.158300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:38:18.158339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:38:18.158354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:38:18.158366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 04:38:18.158387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 04:38:20.025566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 04:38:20.025689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 04:38:20.025719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:38:20.025741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:38:20.025755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 04:38:20.025851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 04:38:20.025868 | orchestrator | 2026-04-09 04:38:20.025881 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external 
frontend] *** 2026-04-09 04:38:20.025894 | orchestrator | Thursday 09 April 2026 04:38:19 +0000 (0:00:05.110) 0:02:36.588 ******** 2026-04-09 04:38:20.025913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:38:20.025926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:38:20.025938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 04:38:20.025950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 04:38:20.025969 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:20.025993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:38:30.554264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:38:30.554363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}})  2026-04-09 04:38:30.554374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 04:38:30.554383 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:30.554393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:38:30.554439 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:38:30.554467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 04:38:30.554475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 04:38:30.554483 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:30.554490 | orchestrator | 2026-04-09 04:38:30.554498 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-09 04:38:30.554506 | orchestrator | Thursday 09 April 2026 04:38:21 +0000 (0:00:01.842) 0:02:38.431 ******** 2026-04-09 04:38:30.554514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:30.554523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:30.554531 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:30.554538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:30.554545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:30.554558 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:30.554565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:30.554572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:38:30.554579 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:30.554586 | orchestrator | 2026-04-09 04:38:30.554593 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-09 04:38:30.554600 | orchestrator | Thursday 09 April 2026 04:38:23 +0000 (0:00:01.900) 0:02:40.332 ******** 2026-04-09 04:38:30.554606 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:38:30.554614 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:38:30.554621 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:38:30.554628 | orchestrator | 2026-04-09 04:38:30.554634 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-09 04:38:30.554641 | orchestrator | Thursday 09 April 2026 04:38:25 +0000 (0:00:02.671) 0:02:43.003 ******** 2026-04-09 04:38:30.554648 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:38:30.554663 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:38:30.554670 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:38:30.554685 | orchestrator | 2026-04-09 04:38:30.554693 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-09 04:38:30.554700 | orchestrator | Thursday 09 April 2026 04:38:28 +0000 (0:00:02.954) 0:02:45.957 ******** 2026-04-09 04:38:30.554706 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:30.554713 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:30.554720 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:30.554727 | orchestrator | 2026-04-09 04:38:30.554734 | 
orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-09 04:38:30.554740 | orchestrator | Thursday 09 April 2026 04:38:30 +0000 (0:00:01.406) 0:02:47.363 ******** 2026-04-09 04:38:30.554747 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:30.554754 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:30.554765 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:38:37.557888 | orchestrator | 2026-04-09 04:38:37.557978 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-09 04:38:37.557995 | orchestrator | Thursday 09 April 2026 04:38:31 +0000 (0:00:01.413) 0:02:48.777 ******** 2026-04-09 04:38:37.558008 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:38:37.558072 | orchestrator | 2026-04-09 04:38:37.558103 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-09 04:38:37.558116 | orchestrator | Thursday 09 April 2026 04:38:33 +0000 (0:00:01.985) 0:02:50.763 ******** 2026-04-09 04:38:37.558132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:38:37.558172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 04:38:37.558187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 04:38:37.558200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 04:38:37.558213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 04:38:37.558256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:38:37.558273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 04:38:37.558299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:38:37.558314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 04:38:37.558329 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 04:38:37.558342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 04:38:37.558368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.377697 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.377831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.377851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:38:39.377869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 04:38:39.377881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.377926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.377947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.377960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.377971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.377984 | orchestrator | 2026-04-09 04:38:39.377997 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-09 04:38:39.378009 | orchestrator | Thursday 09 April 2026 04:38:38 +0000 (0:00:05.251) 0:02:56.014 ******** 2026-04-09 04:38:39.378089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:38:39.378108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 04:38:39.378132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919267 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:38:39.919296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:38:39.919347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 04:38:39.919361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 04:38:39.919424 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:38:39.919442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:38:56.818429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 04:38:56.818579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 04:38:56.818598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 04:38:56.818701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 04:38:56.818724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:38:56.818756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-09 04:38:56.818768 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:38:56.818780 | orchestrator |
2026-04-09 04:38:56.818792 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-04-09 04:38:56.818803 | orchestrator | Thursday 09 April 2026 04:38:41 +0000 (0:00:02.675) 0:02:58.689 ********
2026-04-09 04:38:56.818829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:38:56.818843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:38:56.818855 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:38:56.818865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:38:56.818875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:38:56.818885 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:38:56.818895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001',
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:38:56.818905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:38:56.818914 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:38:56.818924 | orchestrator |
2026-04-09 04:38:56.818934 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-04-09 04:38:56.818944 | orchestrator | Thursday 09 April 2026 04:38:43 +0000 (0:00:01.967) 0:03:00.657 ********
2026-04-09 04:38:56.818954 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:38:56.818966 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:38:56.818978 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:38:56.818989 | orchestrator |
2026-04-09 04:38:56.819001 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-04-09 04:38:56.819013 | orchestrator | Thursday 09 April 2026 04:38:45 +0000 (0:00:02.281) 0:03:02.939 ********
2026-04-09 04:38:56.819024 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:38:56.819036 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:38:56.819057 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:38:56.819069 | orchestrator |
2026-04-09 04:38:56.819080 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-04-09 04:38:56.819091 | orchestrator | Thursday 09 April 2026 04:38:48 +0000 (0:00:01.746) 0:03:05.945 ********
2026-04-09 04:38:56.819103 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:38:56.819114 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:38:56.819125 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:38:56.819136 | orchestrator |
2026-04-09 04:38:56.819148 | orchestrator | TASK
[include_role : glance] *************************************************** 2026-04-09 04:38:56.819160 | orchestrator | Thursday 09 April 2026 04:38:50 +0000 (0:00:01.746) 0:03:07.692 ******** 2026-04-09 04:38:56.819171 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:38:56.819183 | orchestrator | 2026-04-09 04:38:56.819194 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-09 04:38:56.819206 | orchestrator | Thursday 09 April 2026 04:38:52 +0000 (0:00:01.682) 0:03:09.374 ******** 2026-04-09 04:38:56.819234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 04:38:56.965282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 04:38:56.965372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 04:38:56.965390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 04:38:56.965402 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 04:38:56.965411 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 04:39:01.591394 | 
orchestrator | 2026-04-09 04:39:01.591498 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-09 04:39:01.591517 | orchestrator | Thursday 09 April 2026 04:38:58 +0000 (0:00:05.929) 0:03:15.304 ******** 2026-04-09 04:39:01.591543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 04:39:01.591561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 04:39:01.591594 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:01.591631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 04:39:01.591685 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 04:39:01.591699 | orchestrator | skipping: [testbed-node-1] 
2026-04-09 04:39:01.591721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 04:39:19.657090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 04:39:19.657241 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:19.657273 | orchestrator | 2026-04-09 04:39:19.657294 | orchestrator | TASK [haproxy-config : Configuring firewall 
for glance] ************************ 2026-04-09 04:39:19.657316 | orchestrator | Thursday 09 April 2026 04:39:02 +0000 (0:00:04.573) 0:03:19.877 ******** 2026-04-09 04:39:19.657336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 04:39:19.657388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 04:39:19.657411 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:19.657431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 04:39:19.657490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 04:39:19.657512 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:39:19.657532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 04:39:19.657553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 04:39:19.657574 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:19.657594 | orchestrator | 2026-04-09 04:39:19.657607 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-09 04:39:19.657618 | orchestrator 
| Thursday 09 April 2026 04:39:07 +0000 (0:00:04.669) 0:03:24.547 ******** 2026-04-09 04:39:19.657629 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:39:19.657641 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:39:19.657652 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:39:19.657662 | orchestrator | 2026-04-09 04:39:19.657674 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-09 04:39:19.657696 | orchestrator | Thursday 09 April 2026 04:39:09 +0000 (0:00:02.579) 0:03:27.127 ******** 2026-04-09 04:39:19.657708 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:39:19.657719 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:39:19.657729 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:39:19.657740 | orchestrator | 2026-04-09 04:39:19.657751 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-09 04:39:19.657762 | orchestrator | Thursday 09 April 2026 04:39:13 +0000 (0:00:03.034) 0:03:30.162 ******** 2026-04-09 04:39:19.657773 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:19.657818 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:39:19.657829 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:19.657840 | orchestrator | 2026-04-09 04:39:19.657851 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-09 04:39:19.657862 | orchestrator | Thursday 09 April 2026 04:39:14 +0000 (0:00:01.467) 0:03:31.629 ******** 2026-04-09 04:39:19.657873 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:39:19.657884 | orchestrator | 2026-04-09 04:39:19.657895 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-09 04:39:19.657906 | orchestrator | Thursday 09 April 2026 04:39:16 +0000 (0:00:01.970) 0:03:33.600 ******** 2026-04-09 04:39:19.657918 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:39:19.657940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:39:37.091376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:39:37.091522 | orchestrator | 2026-04-09 04:39:37.091550 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-09 04:39:37.091572 | orchestrator | Thursday 09 April 2026 04:39:20 +0000 (0:00:04.481) 0:03:38.082 ******** 2026-04-09 04:39:37.091593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:39:37.091639 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:37.091661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:39:37.091681 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:39:37.091700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:39:37.091719 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:37.091738 | orchestrator | 2026-04-09 04:39:37.091757 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-09 04:39:37.091777 | orchestrator | Thursday 09 April 2026 04:39:22 +0000 (0:00:01.425) 0:03:39.508 ******** 2026-04-09 04:39:37.091798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:39:37.091821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:39:37.091842 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:37.091923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:39:37.091947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:39:37.091969 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:39:37.091989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:39:37.092022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:39:37.092042 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:37.092062 | orchestrator | 2026-04-09 04:39:37.092082 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-09 04:39:37.092103 | orchestrator | Thursday 09 April 2026 04:39:24 +0000 (0:00:01.860) 0:03:41.369 ******** 2026-04-09 04:39:37.092123 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:39:37.092143 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:39:37.092163 | orchestrator | ok: [testbed-node-2] 2026-04-09 
04:39:37.092182 | orchestrator | 2026-04-09 04:39:37.092200 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-09 04:39:37.092219 | orchestrator | Thursday 09 April 2026 04:39:26 +0000 (0:00:02.309) 0:03:43.678 ******** 2026-04-09 04:39:37.092236 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:39:37.092254 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:39:37.092272 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:39:37.092290 | orchestrator | 2026-04-09 04:39:37.092308 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-09 04:39:37.092326 | orchestrator | Thursday 09 April 2026 04:39:29 +0000 (0:00:03.208) 0:03:46.887 ******** 2026-04-09 04:39:37.092344 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:37.092363 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:39:37.092381 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:37.092398 | orchestrator | 2026-04-09 04:39:37.092417 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-09 04:39:37.092435 | orchestrator | Thursday 09 April 2026 04:39:31 +0000 (0:00:01.525) 0:03:48.413 ******** 2026-04-09 04:39:37.092453 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:39:37.092471 | orchestrator | 2026-04-09 04:39:37.092489 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-09 04:39:37.092507 | orchestrator | Thursday 09 April 2026 04:39:33 +0000 (0:00:02.206) 0:03:50.619 ******** 2026-04-09 04:39:37.092543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 04:39:39.242589 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 04:39:39.242731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 04:39:39.242787 | orchestrator | 2026-04-09 04:39:39.242810 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-09 04:39:39.242830 | orchestrator | Thursday 09 April 2026 04:39:38 +0000 (0:00:04.882) 0:03:55.502 ******** 2026-04-09 04:39:39.242851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 04:39:39.242871 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:39.242950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 04:39:49.867842 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:39:49.867962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 04:39:49.868097 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:49.868124 | orchestrator | 2026-04-09 04:39:49.868143 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] 
*********************** 2026-04-09 04:39:49.868162 | orchestrator | Thursday 09 April 2026 04:39:40 +0000 (0:00:02.092) 0:03:57.594 ******** 2026-04-09 04:39:49.868200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 04:39:49.868234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 04:39:49.868258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 04:39:49.868280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 04:39:49.868293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 04:39:49.868306 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:49.868336 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 04:39:49.868350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 04:39:49.868361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 04:39:49.868373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 04:39:49.868384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 04:39:49.868395 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:39:49.868407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 04:39:49.868429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 04:39:49.868441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 04:39:49.868458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 04:39:49.868469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 04:39:49.868480 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:49.868492 | orchestrator | 2026-04-09 04:39:49.868503 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-09 04:39:49.868514 | orchestrator | Thursday 09 April 2026 04:39:42 +0000 (0:00:02.267) 0:03:59.862 ******** 2026-04-09 04:39:49.868525 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:39:49.868537 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:39:49.868548 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:39:49.868559 | 
orchestrator | 2026-04-09 04:39:49.868570 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-09 04:39:49.868581 | orchestrator | Thursday 09 April 2026 04:39:45 +0000 (0:00:02.299) 0:04:02.161 ******** 2026-04-09 04:39:49.868592 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:39:49.868603 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:39:49.868614 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:39:49.868625 | orchestrator | 2026-04-09 04:39:49.868636 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-09 04:39:49.868647 | orchestrator | Thursday 09 April 2026 04:39:47 +0000 (0:00:02.958) 0:04:05.120 ******** 2026-04-09 04:39:49.868658 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:49.868669 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:39:49.868680 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:49.868691 | orchestrator | 2026-04-09 04:39:49.868702 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-09 04:39:49.868713 | orchestrator | Thursday 09 April 2026 04:39:49 +0000 (0:00:01.749) 0:04:06.870 ******** 2026-04-09 04:39:49.868731 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:39:58.298227 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:39:58.298335 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:39:58.298352 | orchestrator | 2026-04-09 04:39:58.298365 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-09 04:39:58.298378 | orchestrator | Thursday 09 April 2026 04:39:51 +0000 (0:00:01.429) 0:04:08.299 ******** 2026-04-09 04:39:58.298389 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:39:58.298400 | orchestrator | 2026-04-09 04:39:58.298412 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] 
******************* 2026-04-09 04:39:58.298423 | orchestrator | Thursday 09 April 2026 04:39:53 +0000 (0:00:02.060) 0:04:10.360 ******** 2026-04-09 04:39:58.298439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 04:39:58.298481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 04:39:58.298496 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 04:39:58.298523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 04:39:58.298555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 04:39:58.298578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 04:39:58.298590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 04:39:58.298602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 04:39:58.298618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 04:39:58.298630 | orchestrator | 2026-04-09 04:39:58.298642 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-09 04:39:58.298653 | orchestrator | Thursday 09 April 2026 04:39:57 +0000 (0:00:04.778) 0:04:15.138 ******** 2026-04-09 04:39:58.298674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 04:40:01.745683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 04:40:01.745788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 04:40:01.745802 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:40:01.745832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 04:40:01.745844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 04:40:01.745854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 04:40:01.745881 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:40:01.745907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 04:40:01.745918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 04:40:01.745928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 04:40:01.745937 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:40:01.745946 | orchestrator | 2026-04-09 04:40:01.745956 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-09 04:40:01.745967 | orchestrator | Thursday 09 April 2026 04:39:59 +0000 (0:00:01.707) 0:04:16.846 ******** 2026-04-09 04:40:01.745981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 04:40:01.745993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 04:40:01.746003 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:40:01.746095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 04:40:01.746108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 04:40:01.746124 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:40:01.746134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 04:40:01.746143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 04:40:01.746152 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:40:01.746161 | orchestrator | 2026-04-09 04:40:01.746170 | 
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-09 04:40:01.746186 | orchestrator | Thursday 09 April 2026 04:40:01 +0000 (0:00:02.043) 0:04:18.890 ******** 2026-04-09 04:40:16.040330 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:40:16.040447 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:40:16.040462 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:40:16.040475 | orchestrator | 2026-04-09 04:40:16.040488 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-09 04:40:16.040501 | orchestrator | Thursday 09 April 2026 04:40:04 +0000 (0:00:02.315) 0:04:21.206 ******** 2026-04-09 04:40:16.040512 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:40:16.040523 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:40:16.040534 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:40:16.040545 | orchestrator | 2026-04-09 04:40:16.040556 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-09 04:40:16.040567 | orchestrator | Thursday 09 April 2026 04:40:07 +0000 (0:00:02.977) 0:04:24.184 ******** 2026-04-09 04:40:16.040578 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:40:16.040590 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:40:16.040601 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:40:16.040612 | orchestrator | 2026-04-09 04:40:16.040624 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-09 04:40:16.040635 | orchestrator | Thursday 09 April 2026 04:40:08 +0000 (0:00:01.387) 0:04:25.571 ******** 2026-04-09 04:40:16.040646 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:40:16.040657 | orchestrator | 2026-04-09 04:40:16.040668 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-09 04:40:16.040679 | orchestrator | 
Thursday 09 April 2026 04:40:10 +0000 (0:00:02.164) 0:04:27.736 ******** 2026-04-09 04:40:16.040696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:40:16.040729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:40:16.040765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:40:16.040798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:40:16.040813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:40:16.040833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:40:16.040855 | orchestrator | 2026-04-09 04:40:16.040869 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-09 04:40:16.040883 | orchestrator | Thursday 09 April 2026 04:40:15 +0000 (0:00:05.045) 0:04:32.781 ******** 2026-04-09 
04:40:16.040898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:40:16.040921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:40:31.390937 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:40:31.391053 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:40:31.391076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:40:31.391112 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:40:31.391141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:40:31.391155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:40:31.391167 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:40:31.391179 | orchestrator | 2026-04-09 04:40:31.391191 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-09 
04:40:31.391204 | orchestrator | Thursday 09 April 2026 04:40:17 +0000 (0:00:01.988) 0:04:34.770 ******** 2026-04-09 04:40:31.391232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:31.391248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:31.391320 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:40:31.391334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:31.391346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:31.391358 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:40:31.391369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:31.391381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:31.391401 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:40:31.391413 | 
orchestrator | 2026-04-09 04:40:31.391424 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-09 04:40:31.391435 | orchestrator | Thursday 09 April 2026 04:40:19 +0000 (0:00:02.187) 0:04:36.957 ******** 2026-04-09 04:40:31.391446 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:40:31.391458 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:40:31.391469 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:40:31.391480 | orchestrator | 2026-04-09 04:40:31.391495 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-09 04:40:31.391508 | orchestrator | Thursday 09 April 2026 04:40:22 +0000 (0:00:02.355) 0:04:39.313 ******** 2026-04-09 04:40:31.391521 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:40:31.391534 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:40:31.391547 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:40:31.391560 | orchestrator | 2026-04-09 04:40:31.391573 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-09 04:40:31.391586 | orchestrator | Thursday 09 April 2026 04:40:25 +0000 (0:00:03.040) 0:04:42.354 ******** 2026-04-09 04:40:31.391599 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:40:31.391612 | orchestrator | 2026-04-09 04:40:31.391625 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-09 04:40:31.391638 | orchestrator | Thursday 09 April 2026 04:40:27 +0000 (0:00:02.228) 0:04:44.583 ******** 2026-04-09 04:40:31.391653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:40:31.391676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:40:33.316877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:40:33.317043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:40:33.317067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:40:33.317082 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 04:40:33.317111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 04:40:33.317143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 04:40:33.317156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:40:33.317177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 04:40:33.317195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 04:40:33.317216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 04:40:33.317237 | orchestrator | 2026-04-09 04:40:33.317260 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-09 04:40:33.317359 | orchestrator | Thursday 09 April 2026 04:40:32 +0000 (0:00:05.512) 0:04:50.096 ******** 2026-04-09 04:40:33.317381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:40:33.317415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:40:35.677541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 04:40:35.677649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 04:40:35.677666 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:40:35.677698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:40:35.677711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:40:35.677724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 04:40:35.677775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 04:40:35.677789 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:40:35.677801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': 
'30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:40:35.677836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:40:35.677859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 04:40:35.677872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 04:40:35.677883 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:40:35.677895 | orchestrator | 2026-04-09 04:40:35.677907 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-09 04:40:35.677927 | orchestrator | Thursday 09 April 2026 04:40:35 +0000 (0:00:02.109) 0:04:52.205 ******** 2026-04-09 04:40:35.677940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:35.677956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:35.677969 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:40:35.677980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:35.677999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-04-09 04:40:51.887147 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:40:51.887244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:51.887258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:40:51.887268 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:40:51.887275 | orchestrator | 2026-04-09 04:40:51.887284 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-09 04:40:51.887292 | orchestrator | Thursday 09 April 2026 04:40:36 +0000 (0:00:01.810) 0:04:54.016 ******** 2026-04-09 04:40:51.887311 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:40:51.887319 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:40:51.887327 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:40:51.887334 | orchestrator | 2026-04-09 04:40:51.887341 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-09 04:40:51.887349 | orchestrator | Thursday 09 April 2026 04:40:39 +0000 (0:00:02.318) 0:04:56.335 ******** 2026-04-09 04:40:51.887356 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:40:51.887363 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:40:51.887370 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:40:51.887378 | orchestrator | 2026-04-09 04:40:51.887385 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-09 04:40:51.887439 | orchestrator | Thursday 09 April 2026 04:40:42 +0000 (0:00:03.057) 0:04:59.393 ******** 2026-04-09 04:40:51.887447 | orchestrator | 
included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:40:51.887454 | orchestrator | 2026-04-09 04:40:51.887462 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-09 04:40:51.887470 | orchestrator | Thursday 09 April 2026 04:40:44 +0000 (0:00:02.578) 0:05:01.972 ******** 2026-04-09 04:40:51.887477 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 04:40:51.887485 | orchestrator | 2026-04-09 04:40:51.887492 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-09 04:40:51.887499 | orchestrator | Thursday 09 April 2026 04:40:49 +0000 (0:00:04.432) 0:05:06.404 ******** 2026-04-09 04:40:51.887511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:40:51.887553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 04:40:51.887563 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:40:51.887575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:40:51.887590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 04:40:51.887598 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:40:51.887611 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:40:55.824392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 04:40:55.824579 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:40:55.824620 | orchestrator | 2026-04-09 04:40:55.824633 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-09 04:40:55.824664 | orchestrator | Thursday 09 April 2026 04:40:53 +0000 (0:00:03.808) 0:05:10.213 ******** 2026-04-09 04:40:55.824692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:40:55.824727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 04:40:55.824740 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:40:55.824778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:40:55.824793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 04:40:55.824812 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:40:55.824824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  
2026-04-09 04:40:55.824844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 04:41:13.122645 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:41:13.122756 | orchestrator | 2026-04-09 04:41:13.122774 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-09 04:41:13.122786 | orchestrator | Thursday 09 April 2026 04:40:56 +0000 (0:00:03.914) 0:05:14.128 ******** 2026-04-09 04:41:13.122815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 04:41:13.122850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 04:41:13.122862 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:13.122873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 04:41:13.122883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 04:41:13.122894 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:13.122905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 04:41:13.122916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 04:41:13.122926 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:41:13.122936 | orchestrator | 2026-04-09 04:41:13.122946 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-09 04:41:13.122956 | orchestrator | Thursday 09 April 2026 04:41:00 +0000 (0:00:03.737) 0:05:17.865 ******** 2026-04-09 04:41:13.122966 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:41:13.122994 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:41:13.123005 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:41:13.123015 | orchestrator | 2026-04-09 04:41:13.123025 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-09 04:41:13.123035 | orchestrator | Thursday 09 April 2026 04:41:04 +0000 (0:00:03.287) 0:05:21.153 ******** 2026-04-09 04:41:13.123044 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:13.123054 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:13.123072 | orchestrator | skipping: [testbed-node-2] 2026-04-09 
04:41:13.123082 | orchestrator | 2026-04-09 04:41:13.123092 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-09 04:41:13.123102 | orchestrator | Thursday 09 April 2026 04:41:06 +0000 (0:00:02.516) 0:05:23.670 ******** 2026-04-09 04:41:13.123112 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:13.123121 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:13.123131 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:41:13.123142 | orchestrator | 2026-04-09 04:41:13.123159 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-09 04:41:13.123171 | orchestrator | Thursday 09 April 2026 04:41:07 +0000 (0:00:01.445) 0:05:25.115 ******** 2026-04-09 04:41:13.123183 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:41:13.123195 | orchestrator | 2026-04-09 04:41:13.123206 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-09 04:41:13.123218 | orchestrator | Thursday 09 April 2026 04:41:09 +0000 (0:00:01.919) 0:05:27.035 ******** 2026-04-09 04:41:13.123230 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 
04:41:13.123243 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 04:41:13.123255 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 04:41:13.123267 | orchestrator | 2026-04-09 04:41:13.123279 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-09 04:41:13.123292 | orchestrator | Thursday 09 April 2026 04:41:13 +0000 (0:00:03.115) 0:05:30.151 ******** 2026-04-09 04:41:13.123311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 04:41:28.382795 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:28.382931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 04:41:28.382953 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:28.382966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 04:41:28.382978 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:41:28.382990 | orchestrator | 2026-04-09 04:41:28.383003 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-09 04:41:28.383016 | orchestrator | Thursday 09 April 2026 04:41:14 +0000 (0:00:01.431) 0:05:31.582 ******** 2026-04-09 04:41:28.383028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 04:41:28.383042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 04:41:28.383053 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:28.383065 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:28.383076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 04:41:28.383088 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 04:41:28.383099 | orchestrator | 2026-04-09 04:41:28.383111 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-09 04:41:28.383123 | orchestrator | Thursday 09 April 2026 04:41:16 +0000 (0:00:01.711) 0:05:33.294 ******** 2026-04-09 04:41:28.383134 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:28.383145 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:28.383180 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:41:28.383192 | orchestrator | 2026-04-09 04:41:28.383203 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-09 04:41:28.383215 | orchestrator | Thursday 09 April 2026 04:41:17 +0000 (0:00:01.527) 0:05:34.821 ******** 2026-04-09 04:41:28.383226 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:28.383237 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:28.383248 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:41:28.383259 | orchestrator | 2026-04-09 04:41:28.383270 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-09 04:41:28.383281 | orchestrator | Thursday 09 April 2026 04:41:20 +0000 (0:00:02.357) 0:05:37.179 ******** 2026-04-09 04:41:28.383292 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:28.383303 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:28.383314 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:41:28.383325 | orchestrator | 2026-04-09 04:41:28.383335 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-09 04:41:28.383347 | orchestrator | Thursday 09 April 2026 04:41:21 +0000 (0:00:01.559) 0:05:38.739 ******** 2026-04-09 04:41:28.383357 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:41:28.383368 | orchestrator | 2026-04-09 04:41:28.383380 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-09 04:41:28.383391 | orchestrator | Thursday 09 April 2026 04:41:23 +0000 (0:00:02.358) 0:05:41.097 ******** 2026-04-09 04:41:28.383429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:41:28.383446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:41:28.383459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:28.383482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2026-04-09 04:41:28.383507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 04:41:28.493837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 04:41:28.493940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 04:41:28.493979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 04:41:28.493992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:28.494075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:28.494091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:28.494103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:28.494118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:28.494147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:28.494221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 04:41:28.494235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 04:41:28.494260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:28.559846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:28.559973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:41:28.560014 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:28.560027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:28.560051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:28.560081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 04:41:28.560093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 04:41:28.560114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:28.560125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:28.560136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 04:41:28.560153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:28.560171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:28.700260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 04:41:28.700383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 04:41:28.700400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 04:41:28.700428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:28.700459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:28.700481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:28.700493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:28.700506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:28.700518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 04:41:28.700535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:28.700554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.417264 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 04:41:31.417390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:31.417409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.417425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 04:41:31.417501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:31.417517 | orchestrator | 2026-04-09 04:41:31.417530 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-09 04:41:31.417543 | orchestrator | Thursday 09 April 2026 04:41:29 +0000 (0:00:05.976) 0:05:47.074 ******** 2026-04-09 04:41:31.417574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:41:31.417611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.417681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 04:41:31.417702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 04:41:31.417724 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.526209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:31.526308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:31.526325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 04:41:31.526340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:41:31.526376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:31.526428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.526441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.526481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 04:41:31.526495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 04:41:31.526511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 
'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 04:41:31.526529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:31.526547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.592333 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.592431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:41:31.592448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:31.592478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 04:41:31.592512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.592544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:31.592557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:31.592569 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:31.592583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 04:41:31.592601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 04:41:31.592688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 04:41:31.592714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:31.686464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.686536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.686543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:31.686574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 04:41:31.686582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:31.686588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:31.686607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 04:41:31.686614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.686656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:31.686682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 04:41:31.686690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:31.686695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:31.686706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 04:41:47.477889 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:47.478006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 04:41:47.478096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 04:41:47.478160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 04:41:47.478179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 04:41:47.478192 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:41:47.478204 | orchestrator | 2026-04-09 04:41:47.478217 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-09 04:41:47.478229 | orchestrator | Thursday 09 April 2026 04:41:33 +0000 (0:00:03.141) 0:05:50.215 ******** 2026-04-09 04:41:47.478241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}})  2026-04-09 04:41:47.478256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:41:47.478269 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:41:47.478281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:41:47.478310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:41:47.478323 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:41:47.478334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:41:47.478346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:41:47.478365 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:41:47.478376 | orchestrator | 2026-04-09 04:41:47.478388 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-09 04:41:47.478399 | orchestrator | Thursday 09 April 2026 04:41:35 +0000 (0:00:02.923) 0:05:53.139 ******** 2026-04-09 04:41:47.478410 | orchestrator | 
ok: [testbed-node-0] 2026-04-09 04:41:47.478423 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:41:47.478436 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:41:47.478449 | orchestrator | 2026-04-09 04:41:47.478462 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-09 04:41:47.478475 | orchestrator | Thursday 09 April 2026 04:41:38 +0000 (0:00:02.224) 0:05:55.364 ******** 2026-04-09 04:41:47.478488 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:41:47.478501 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:41:47.478513 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:41:47.478526 | orchestrator | 2026-04-09 04:41:47.478539 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-09 04:41:47.478553 | orchestrator | Thursday 09 April 2026 04:41:41 +0000 (0:00:03.141) 0:05:58.506 ******** 2026-04-09 04:41:47.478571 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:41:47.478590 | orchestrator | 2026-04-09 04:41:47.478613 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-09 04:41:47.478648 | orchestrator | Thursday 09 April 2026 04:41:43 +0000 (0:00:02.447) 0:06:00.953 ******** 2026-04-09 04:41:47.478668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 04:41:47.478689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 04:41:47.478768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 04:42:04.071185 | orchestrator | 2026-04-09 04:42:04.071301 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-09 04:42:04.071329 | orchestrator | Thursday 09 April 2026 04:41:48 +0000 (0:00:04.825) 0:06:05.778 ******** 2026-04-09 04:42:04.071354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /']}}}})  2026-04-09 04:42:04.071380 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:42:04.071527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 04:42:04.071564 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:42:04.071578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 04:42:04.071609 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:42:04.071622 | orchestrator | 2026-04-09 04:42:04.071634 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-09 04:42:04.071645 | orchestrator | Thursday 09 April 2026 04:41:50 +0000 (0:00:02.159) 0:06:07.938 ******** 2026-04-09 04:42:04.071658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 04:42:04.071692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 04:42:04.071706 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:42:04.071718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 04:42:04.071731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}) 
 2026-04-09 04:42:04.071745 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:42:04.071764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 04:42:04.071778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 04:42:04.071791 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:42:04.071831 | orchestrator | 2026-04-09 04:42:04.071846 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-09 04:42:04.071859 | orchestrator | Thursday 09 April 2026 04:41:52 +0000 (0:00:01.735) 0:06:09.674 ******** 2026-04-09 04:42:04.071873 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:42:04.071886 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:42:04.071899 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:42:04.071912 | orchestrator | 2026-04-09 04:42:04.071925 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-09 04:42:04.071938 | orchestrator | Thursday 09 April 2026 04:41:54 +0000 (0:00:02.239) 0:06:11.913 ******** 2026-04-09 04:42:04.071951 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:42:04.071964 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:42:04.071977 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:42:04.071990 | orchestrator | 2026-04-09 04:42:04.072004 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-09 04:42:04.072017 | orchestrator | Thursday 09 April 2026 04:41:57 +0000 (0:00:03.107) 0:06:15.021 ******** 
2026-04-09 04:42:04.072030 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:42:04.072052 | orchestrator | 2026-04-09 04:42:04.072065 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-09 04:42:04.072079 | orchestrator | Thursday 09 April 2026 04:42:00 +0000 (0:00:02.457) 0:06:17.478 ******** 2026-04-09 04:42:04.072092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:42:04.072113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:42:07.280623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:42:07.280731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:42:07.280772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:42:07.280785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:42:07.280818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:42:07.280904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-04-09 04:42:07.280917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:42:07.280937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:42:07.280950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:42:07.280962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:42:07.280974 | orchestrator | 2026-04-09 04:42:07.280987 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-09 04:42:07.281007 | orchestrator | Thursday 09 April 2026 04:42:07 +0000 (0:00:06.945) 0:06:24.423 ******** 2026-04-09 04:42:08.466972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:42:08.467082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:42:08.467121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:42:08.467136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:42:08.467169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:42:08.467183 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:42:08.467203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:42:08.467223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 
04:42:08.467235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:42:08.467247 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:42:08.467259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:42:08.467286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:42:29.691615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 04:42:29.691751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 04:42:29.691773 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:42:29.691787 | orchestrator | 2026-04-09 04:42:29.691800 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-09 04:42:29.691812 | orchestrator | Thursday 09 April 2026 04:42:09 +0000 (0:00:02.421) 0:06:26.845 ******** 2026-04-09 04:42:29.691824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.691839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.691852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.691865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.691876 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
04:42:29.691888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.691899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.691911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.691923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.691934 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:42:29.692040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.692073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.692085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.692097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:42:29.692108 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:42:29.692119 | orchestrator | 2026-04-09 04:42:29.692131 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-09 04:42:29.692142 | orchestrator | Thursday 09 April 2026 04:42:11 +0000 (0:00:01.990) 0:06:28.835 ******** 2026-04-09 04:42:29.692153 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:42:29.692167 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:42:29.692180 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:42:29.692191 | orchestrator | 2026-04-09 04:42:29.692204 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-09 04:42:29.692217 | orchestrator | Thursday 09 April 2026 04:42:14 +0000 (0:00:02.322) 0:06:31.158 ******** 2026-04-09 04:42:29.692229 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:42:29.692241 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:42:29.692253 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:42:29.692266 | orchestrator | 2026-04-09 04:42:29.692278 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-09 04:42:29.692291 | orchestrator | Thursday 09 April 2026 04:42:17 +0000 (0:00:03.652) 0:06:34.810 ******** 2026-04-09 04:42:29.692304 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:42:29.692316 | orchestrator | 2026-04-09 04:42:29.692329 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-04-09 04:42:29.692342 | orchestrator | Thursday 09 April 2026 04:42:20 +0000 (0:00:02.622) 0:06:37.432 ******** 2026-04-09 04:42:29.692355 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-09 04:42:29.692370 | orchestrator | 2026-04-09 04:42:29.692382 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-09 04:42:29.692395 | orchestrator | Thursday 09 April 2026 04:42:22 +0000 (0:00:02.491) 0:06:39.924 ******** 2026-04-09 04:42:29.692410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 04:42:29.692424 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 04:42:29.692442 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 04:42:29.692454 | orchestrator | 2026-04-09 04:42:29.692466 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-09 04:42:29.692478 | orchestrator | Thursday 09 April 2026 04:42:28 +0000 (0:00:05.488) 0:06:45.412 ******** 2026-04-09 04:42:29.692496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 04:42:29.692514 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:42:55.754127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 04:42:55.754276 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:42:55.754307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 04:42:55.754329 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:42:55.754350 | orchestrator | 2026-04-09 04:42:55.754372 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-09 04:42:55.754394 | orchestrator | Thursday 09 April 2026 04:42:30 +0000 (0:00:02.680) 0:06:48.093 ******** 2026-04-09 04:42:55.754416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 04:42:55.754442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 04:42:55.754464 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:42:55.754486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 04:42:55.754507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 04:42:55.754563 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:42:55.754587 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 04:42:55.754610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 04:42:55.754634 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:42:55.754657 | orchestrator | 2026-04-09 04:42:55.754682 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 04:42:55.754706 | orchestrator | Thursday 09 April 2026 04:42:33 +0000 (0:00:02.906) 0:06:50.999 ******** 2026-04-09 04:42:55.754728 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:42:55.754752 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:42:55.754776 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:42:55.754799 | orchestrator | 2026-04-09 04:42:55.754822 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 04:42:55.754843 | orchestrator | Thursday 09 April 2026 04:42:37 +0000 (0:00:03.900) 0:06:54.900 ******** 2026-04-09 04:42:55.754866 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:42:55.754890 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:42:55.754911 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:42:55.754932 | orchestrator | 2026-04-09 04:42:55.754953 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-09 04:42:55.754974 | orchestrator | Thursday 09 April 2026 04:42:42 +0000 (0:00:04.270) 0:06:59.171 ******** 2026-04-09 04:42:55.754995 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-09 04:42:55.755017 | orchestrator | 2026-04-09 04:42:55.755057 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-09 04:42:55.755080 | orchestrator | Thursday 09 April 2026 04:42:44 +0000 (0:00:01.980) 0:07:01.151 ******** 2026-04-09 04:42:55.755212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 04:42:55.755236 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:42:55.755257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 04:42:55.755277 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:42:55.755295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 04:42:55.755336 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:42:55.755357 | orchestrator | 2026-04-09 04:42:55.755378 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-09 04:42:55.755399 | orchestrator | Thursday 09 April 2026 04:42:46 +0000 (0:00:02.778) 0:07:03.929 ******** 2026-04-09 04:42:55.755418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 04:42:55.755437 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:42:55.755455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 04:42:55.755473 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:42:55.755492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 04:42:55.755511 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:42:55.755530 | orchestrator | 2026-04-09 04:42:55.755548 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-09 04:42:55.755567 | orchestrator | Thursday 09 April 2026 04:42:49 +0000 (0:00:02.497) 0:07:06.427 ******** 2026-04-09 04:42:55.755580 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:42:55.755592 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:42:55.755602 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:42:55.755613 | orchestrator | 2026-04-09 04:42:55.755624 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 04:42:55.755645 | orchestrator | Thursday 09 April 2026 04:42:52 +0000 (0:00:02.882) 0:07:09.309 ******** 2026-04-09 04:42:55.755656 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:42:55.755668 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:42:55.755679 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:42:55.755690 | orchestrator | 2026-04-09 04:42:55.755701 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 04:42:55.755712 | orchestrator | Thursday 09 April 2026 04:42:55 +0000 (0:00:03.584) 0:07:12.894 ******** 2026-04-09 04:43:23.697135 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:43:23.697322 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:43:23.697338 | orchestrator | ok: [testbed-node-2] 2026-04-09 
04:43:23.697348 | orchestrator | 2026-04-09 04:43:23.697357 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-09 04:43:23.697367 | orchestrator | Thursday 09 April 2026 04:42:59 +0000 (0:00:04.191) 0:07:17.086 ******** 2026-04-09 04:43:23.697376 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-09 04:43:23.697385 | orchestrator | 2026-04-09 04:43:23.697394 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-09 04:43:23.697422 | orchestrator | Thursday 09 April 2026 04:43:01 +0000 (0:00:01.669) 0:07:18.755 ******** 2026-04-09 04:43:23.697434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 04:43:23.697446 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:43:23.697455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 04:43:23.697464 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 04:43:23.697472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 04:43:23.697480 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:43:23.697488 | orchestrator | 2026-04-09 04:43:23.697496 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-09 04:43:23.697505 | orchestrator | Thursday 09 April 2026 04:43:04 +0000 (0:00:02.809) 0:07:21.564 ******** 2026-04-09 04:43:23.697514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 04:43:23.697522 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:43:23.697530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 04:43:23.697539 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:43:23.697576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 04:43:23.697592 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:43:23.697600 | orchestrator | 2026-04-09 04:43:23.697608 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-09 04:43:23.697616 | orchestrator | Thursday 09 April 2026 04:43:06 +0000 (0:00:02.491) 0:07:24.056 ******** 2026-04-09 04:43:23.697624 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:43:23.697632 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:43:23.697641 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:43:23.697650 | orchestrator | 2026-04-09 04:43:23.697660 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 04:43:23.697670 | orchestrator | Thursday 09 April 2026 04:43:09 +0000 (0:00:02.504) 0:07:26.561 ******** 2026-04-09 04:43:23.697679 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:43:23.697689 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:43:23.697698 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:43:23.697708 | orchestrator | 2026-04-09 04:43:23.697718 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 04:43:23.697727 | orchestrator | Thursday 09 April 2026 04:43:13 +0000 (0:00:03.859) 0:07:30.421 ******** 2026-04-09 04:43:23.697737 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:43:23.697746 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:43:23.697755 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:43:23.697765 | orchestrator | 2026-04-09 04:43:23.697774 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-09 04:43:23.697784 | orchestrator | Thursday 09 April 2026 04:43:17 +0000 (0:00:04.431) 0:07:34.853 ******** 2026-04-09 04:43:23.697793 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:43:23.697803 | orchestrator | 2026-04-09 04:43:23.697812 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-09 04:43:23.697822 | orchestrator | Thursday 09 April 2026 04:43:19 +0000 (0:00:02.217) 0:07:37.070 ******** 2026-04-09 04:43:23.697832 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 04:43:23.697844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 04:43:23.697856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 04:43:23.697881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 04:43:25.548567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:43:25.548677 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 04:43:25.548694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 04:43:25.548708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 04:43:25.548721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 04:43:25.548791 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 04:43:25.548807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:43:25.548819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 04:43:25.548830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 04:43:25.548842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 04:43:25.548854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:43:25.548874 | orchestrator | 2026-04-09 04:43:25.548888 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-09 04:43:25.548900 | orchestrator | Thursday 09 April 2026 04:43:25 +0000 (0:00:05.197) 0:07:42.268 ******** 2026-04-09 04:43:25.548925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 04:43:25.859852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-04-09 04:43:25.859948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 04:43:25.859964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 04:43:25.859975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:43:25.860009 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:43:25.860035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 04:43:25.860050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 04:43:25.860079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 04:43:25.860090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 04:43:25.860101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:43:25.860111 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:43:25.860129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 04:43:25.860145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 04:43:25.860162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 04:43:43.938463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 04:43:43.938613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 04:43:43.938634 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:43:43.938648 | orchestrator | 2026-04-09 04:43:43.938661 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-09 04:43:43.938673 | orchestrator | Thursday 09 April 2026 04:43:27 +0000 (0:00:01.885) 0:07:44.153 ******** 2026-04-09 04:43:43.938686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 04:43:43.938722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 04:43:43.938736 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:43:43.938747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 04:43:43.938759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 04:43:43.938770 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:43:43.938780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 04:43:43.938792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 04:43:43.938803 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:43:43.938814 | orchestrator | 2026-04-09 04:43:43.938825 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-09 04:43:43.938836 | orchestrator | Thursday 09 April 2026 04:43:28 +0000 (0:00:01.821) 0:07:45.975 ******** 2026-04-09 04:43:43.938846 | orchestrator | ok: [testbed-node-0] 2026-04-09 
04:43:43.938858 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:43:43.938868 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:43:43.938879 | orchestrator | 2026-04-09 04:43:43.938890 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-09 04:43:43.938903 | orchestrator | Thursday 09 April 2026 04:43:31 +0000 (0:00:02.772) 0:07:48.748 ******** 2026-04-09 04:43:43.938916 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:43:43.938928 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:43:43.938941 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:43:43.938954 | orchestrator | 2026-04-09 04:43:43.938966 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-09 04:43:43.938978 | orchestrator | Thursday 09 April 2026 04:43:35 +0000 (0:00:03.902) 0:07:52.650 ******** 2026-04-09 04:43:43.938991 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:43:43.939004 | orchestrator | 2026-04-09 04:43:43.939016 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-09 04:43:43.939029 | orchestrator | Thursday 09 April 2026 04:43:37 +0000 (0:00:02.232) 0:07:54.884 ******** 2026-04-09 04:43:43.939063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:43:43.939081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:43:43.939141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:43:43.939164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:43:43.939195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:43:47.327435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:43:47.327547 | orchestrator | 2026-04-09 04:43:47.327566 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-09 
04:43:47.327579 | orchestrator | Thursday 09 April 2026 04:43:45 +0000 (0:00:07.396) 0:08:02.280 ******** 2026-04-09 04:43:47.327592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:43:47.327624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:43:47.327638 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:43:47.327669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:43:47.327704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:43:47.327717 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:43:47.327729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:43:47.327747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:43:47.327759 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:43:47.327777 | orchestrator | 2026-04-09 04:43:47.327789 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-09 04:43:47.327800 | orchestrator | Thursday 09 April 2026 04:43:46 +0000 (0:00:01.757) 0:08:04.038 ******** 2026-04-09 04:43:47.327813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:43:47.327834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 04:43:57.871966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 04:43:57.872085 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:43:57.872104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:43:57.872118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 04:43:57.872131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 04:43:57.872142 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:43:57.872154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:43:57.872166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 04:43:57.872178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 04:43:57.872190 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:43:57.872201 | orchestrator | 2026-04-09 04:43:57.872229 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-09 04:43:57.872243 | orchestrator | Thursday 09 April 2026 04:43:49 +0000 (0:00:02.157) 0:08:06.196 ******** 2026-04-09 04:43:57.872254 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:43:57.872265 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:43:57.872277 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:43:57.872288 | orchestrator | 2026-04-09 04:43:57.872299 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-09 04:43:57.872310 | orchestrator | Thursday 09 April 2026 04:43:50 +0000 (0:00:01.582) 0:08:07.778 ******** 2026-04-09 04:43:57.872322 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:43:57.872333 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:43:57.872367 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:43:57.872379 | orchestrator | 2026-04-09 04:43:57.872390 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-09 04:43:57.872401 | orchestrator | Thursday 09 April 2026 04:43:53 +0000 (0:00:02.385) 0:08:10.163 ******** 2026-04-09 04:43:57.872461 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:43:57.872474 | orchestrator | 2026-04-09 04:43:57.872485 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-09 04:43:57.872496 | orchestrator | Thursday 09 April 2026 04:43:55 +0000 (0:00:02.688) 0:08:12.852 ******** 2026-04-09 04:43:57.872534 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 04:43:57.872554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 04:43:57.872569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 04:43:57.872590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:43:57.872612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 04:43:57.872627 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:43:57.872640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:43:57.872662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 04:44:00.035683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:00.035773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 04:44:00.035801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check 
send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 04:44:00.035829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 04:44:00.035838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:00.035844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:00.035868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 04:44:00.035875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:44:00.035885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:44:00.035897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 04:44:00.035911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 04:44:01.847331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:01.847423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:01.847488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:01.847521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:44:01.847534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:01.847544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 04:44:01.847569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 04:44:01.847580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 04:44:01.847604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:01.847614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:01.847624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 04:44:01.847633 | orchestrator | 2026-04-09 04:44:01.847645 | orchestrator | TASK [haproxy-config : 
Add configuration for prometheus when using single external frontend] *** 2026-04-09 04:44:01.847655 | orchestrator | Thursday 09 April 2026 04:44:01 +0000 (0:00:05.652) 0:08:18.505 ******** 2026-04-09 04:44:01.847665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 04:44:01.847682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 04:44:02.146233 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.146370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.146416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 04:44:02.146519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:44:02.146545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 04:44:02.146593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.146617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.146660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 04:44:02.146681 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:44:02.146695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 04:44:02.146708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 04:44:02.146720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.146732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.146755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 04:44:02.262529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 04:44:02.262636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 04:44:02.262656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:44:02.262673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.262686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 04:44:02.262741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.262762 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.262775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 04:44:02.262788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:44:02.262801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:02.262813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 04:44:02.262842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 04:44:15.491887 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:44:15.492001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:15.492019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 04:44:15.492028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 04:44:15.492037 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:44:15.492046 | orchestrator | 2026-04-09 04:44:15.492055 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-09 04:44:15.492064 | orchestrator | Thursday 09 April 2026 04:44:03 +0000 (0:00:02.108) 0:08:20.613 ******** 2026-04-09 04:44:15.492075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-09 04:44:15.492086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-09 04:44:15.492113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-09 04:44:15.492122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:44:15.492131 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:15.492140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 04:44:15.492148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 04:44:15.492175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:44:15.492184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:44:15.492192 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:15.492201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 04:44:15.492209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 04:44:15.492218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:44:15.492226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 04:44:15.492243 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:15.492251 | orchestrator |
2026-04-09 04:44:15.492260 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-09 04:44:15.492268 | orchestrator | Thursday 09 April 2026 04:44:06 +0000 (0:00:02.779) 0:08:23.393 ********
2026-04-09 04:44:15.492276 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:15.492284 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:15.492292 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:15.492300 | orchestrator |
2026-04-09 04:44:15.492309 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-04-09 04:44:15.492317 | orchestrator | Thursday 09 April 2026 04:44:07 +0000 (0:00:01.548) 0:08:24.941 ********
2026-04-09 04:44:15.492325 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:15.492333 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:15.492341 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:15.492349 | orchestrator |
2026-04-09 04:44:15.492357 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-04-09 04:44:15.492365 | orchestrator | Thursday 09 April 2026 04:44:10 +0000 (0:00:02.355) 0:08:27.297 ********
2026-04-09 04:44:15.492373 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:44:15.492381 | orchestrator |
2026-04-09 04:44:15.492389 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-04-09 04:44:15.492397 | orchestrator | Thursday 09 April 2026 04:44:13 +0000 (0:00:02.889) 0:08:30.186 ********
2026-04-09 04:44:15.492412 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:44:31.419167 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:44:31.419276 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:44:31.419313 | orchestrator |
2026-04-09 04:44:31.419328 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-04-09 04:44:31.419340 | orchestrator | Thursday 09 April 2026 04:44:16 +0000 (0:00:03.844) 0:08:34.031 ********
2026-04-09 04:44:31.419352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:44:31.419365 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:31.419401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:44:31.419415 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:31.419426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:44:31.419446 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:31.419458 | orchestrator |
2026-04-09 04:44:31.419470 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-04-09 04:44:31.419481 | orchestrator | Thursday 09 April 2026 04:44:18 +0000 (0:00:01.558) 0:08:35.589 ********
2026-04-09 04:44:31.419493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-09 04:44:31.419505 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:31.419517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-09 04:44:31.419528 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:31.419539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-09 04:44:31.419550 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:31.419561 | orchestrator |
2026-04-09 04:44:31.419572 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-04-09 04:44:31.419632 | orchestrator | Thursday 09 April 2026 04:44:20 +0000 (0:00:02.065) 0:08:37.655 ********
2026-04-09 04:44:31.419644 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:31.419655 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:31.419666 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:31.419679 | orchestrator |
2026-04-09 04:44:31.419692 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-04-09 04:44:31.419705 | orchestrator | Thursday 09 April 2026 04:44:22 +0000 (0:00:01.526) 0:08:39.181 ********
2026-04-09 04:44:31.419717 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:31.419730 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:31.419744 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:31.419756 | orchestrator |
2026-04-09 04:44:31.419768 | orchestrator | TASK [include_role : skyline] **************************************************
2026-04-09 04:44:31.419782 | orchestrator | Thursday 09 April 2026 04:44:24 +0000 (0:00:02.500) 0:08:41.682 ********
2026-04-09 04:44:31.419795 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:44:31.419808 | orchestrator |
2026-04-09 04:44:31.419820 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-04-09 04:44:31.419833 | orchestrator | Thursday 09 April 2026 04:44:27 +0000 (0:00:02.837) 0:08:44.519 ********
2026-04-09 04:44:31.419848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 04:44:31.419878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 04:44:36.562818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 04:44:36.562927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 04:44:36.562944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 04:44:36.562990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 04:44:36.563025 | orchestrator |
2026-04-09 04:44:36.563039 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-04-09 04:44:36.563051 | orchestrator | Thursday 09 April 2026 04:44:36 +0000 (0:00:08.696) 0:08:53.215 ********
2026-04-09 04:44:36.563066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 04:44:36.563079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 04:44:36.563092 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:36.563106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 04:44:36.563162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 04:44:58.252416 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:58.252556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 04:44:58.252572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 04:44:58.252579 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:58.252585 | orchestrator |
2026-04-09 04:44:58.252592 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-04-09 04:44:58.252600 | orchestrator | Thursday 09 April 2026 04:44:38 +0000 (0:00:02.224) 0:08:55.440 ********
2026-04-09 04:44:58.252609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 04:44:58.252640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 04:44:58.252648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 04:44:58.252655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 04:44:58.252661 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:58.252667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 04:44:58.252673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 04:44:58.252693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 04:44:58.252699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 04:44:58.252779 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:58.252786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 04:44:58.252792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 04:44:58.252798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 04:44:58.252803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 04:44:58.252809 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:58.252815 | orchestrator |
2026-04-09 04:44:58.252821 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-09 04:44:58.252826 | orchestrator | Thursday 09 April 2026 04:44:40 +0000 (0:00:02.379) 0:08:57.820 ********
2026-04-09 04:44:58.252839 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:44:58.252845 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:44:58.252851 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:44:58.252857 | orchestrator |
2026-04-09 04:44:58.252863 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-09 04:44:58.252868 | orchestrator | Thursday 09 April 2026 04:44:43 +0000 (0:00:02.391) 0:09:00.211 ********
2026-04-09 04:44:58.252874 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:44:58.252880 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:44:58.252885 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:44:58.252891 | orchestrator |
2026-04-09 04:44:58.252896 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-09 04:44:58.252902 | orchestrator | Thursday 09 April 2026 04:44:46 +0000 (0:00:03.131) 0:09:03.343 ********
2026-04-09 04:44:58.252907 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:58.252913 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:58.252919 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:58.252925 | orchestrator |
2026-04-09 04:44:58.252932 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-09 04:44:58.252938 | orchestrator | Thursday 09 April 2026 04:44:48 +0000 (0:00:01.882) 0:09:05.225 ********
2026-04-09 04:44:58.252943 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:58.252949 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:58.252954 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:58.252960 | orchestrator |
2026-04-09 04:44:58.252971 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-09 04:44:58.252977 | orchestrator | Thursday 09 April 2026 04:44:49 +0000 (0:00:01.425) 0:09:06.651 ********
2026-04-09 04:44:58.252984 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:58.252990 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:58.252996 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:58.253002 | orchestrator |
2026-04-09 04:44:58.253008 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-09 04:44:58.253015 | orchestrator | Thursday 09 April 2026 04:44:50 +0000 (0:00:01.396) 0:09:08.047 ********
2026-04-09 04:44:58.253020 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:58.253026 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:58.253032 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:58.253038 | orchestrator |
2026-04-09 04:44:58.253044 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-09 04:44:58.253050 | orchestrator | Thursday 09 April 2026 04:44:52 +0000 (0:00:01.516) 0:09:09.564 ********
2026-04-09 04:44:58.253057 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:44:58.253063 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:44:58.253069 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:44:58.253075 | orchestrator |
2026-04-09 04:44:58.253082 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-04-09 04:44:58.253087 | orchestrator | Thursday 09 April 2026 04:44:54 +0000 (0:00:01.724) 0:09:11.288 ********
2026-04-09 04:44:58.253094 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:44:58.253101 | orchestrator |
2026-04-09 04:44:58.253108 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-09 04:44:58.253114 | orchestrator | Thursday 09 April 2026 04:44:56 +0000 (0:00:02.463) 0:09:13.751 ********
2026-04-09 04:44:58.253131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 04:45:03.052179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 04:45:03.052283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 04:45:03.052300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 04:45:03.052330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 04:45:03.052343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 04:45:03.052355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 04:45:03.052387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 04:45:03.052425 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 04:45:03.052439 | orchestrator | 2026-04-09 04:45:03.052452 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-09 04:45:03.052465 | orchestrator | Thursday 09 April 2026 04:45:01 +0000 (0:00:04.533) 0:09:18.285 ******** 2026-04-09 04:45:03.052477 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 04:45:03.052490 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:45:03.052501 | orchestrator | } 2026-04-09 04:45:03.052513 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 04:45:03.052524 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:45:03.052535 | orchestrator | } 2026-04-09 04:45:03.052547 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 04:45:03.052558 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:45:03.052569 | orchestrator | } 2026-04-09 04:45:03.052581 | orchestrator | 2026-04-09 04:45:03.052593 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 04:45:03.052604 | orchestrator | Thursday 09 April 2026 04:45:02 +0000 (0:00:01.417) 0:09:19.703 ******** 2026-04-09 04:45:03.052616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 04:45:03.052634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:45:03.052646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:45:03.052658 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:45:03.052678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 04:45:03.052698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:47:06.945977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:47:06.946155 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:47:06.946175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 04:47:06.946205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 04:47:06.946218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 04:47:06.946230 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:47:06.946287 | orchestrator | 2026-04-09 04:47:06.946325 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-09 04:47:06.946339 | orchestrator | Thursday 09 April 2026 
04:45:05 +0000 (0:00:02.748) 0:09:22.451 ******** 2026-04-09 04:47:06.946350 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:47:06.946362 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:47:06.946374 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:47:06.946384 | orchestrator | 2026-04-09 04:47:06.946396 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-09 04:47:06.946407 | orchestrator | Thursday 09 April 2026 04:45:07 +0000 (0:00:01.899) 0:09:24.351 ******** 2026-04-09 04:47:06.946418 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:47:06.946429 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:47:06.946441 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:47:06.946451 | orchestrator | 2026-04-09 04:47:06.946463 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-09 04:47:06.946477 | orchestrator | Thursday 09 April 2026 04:45:08 +0000 (0:00:01.410) 0:09:25.761 ******** 2026-04-09 04:47:06.946490 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:47:06.946504 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:47:06.946517 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:47:06.946533 | orchestrator | 2026-04-09 04:47:06.946547 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-09 04:47:06.946561 | orchestrator | Thursday 09 April 2026 04:45:15 +0000 (0:00:07.161) 0:09:32.922 ******** 2026-04-09 04:47:06.946574 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:47:06.946588 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:47:06.946600 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:47:06.946614 | orchestrator | 2026-04-09 04:47:06.946626 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-09 04:47:06.946640 | orchestrator | Thursday 09 April 2026 04:45:22 +0000 (0:00:07.133) 0:09:40.056 
******** 2026-04-09 04:47:06.946654 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:47:06.946667 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:47:06.946680 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:47:06.946692 | orchestrator | 2026-04-09 04:47:06.946703 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-09 04:47:06.946714 | orchestrator | Thursday 09 April 2026 04:45:29 +0000 (0:00:07.077) 0:09:47.134 ******** 2026-04-09 04:47:06.946725 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:47:06.946737 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:47:06.946748 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:47:06.946759 | orchestrator | 2026-04-09 04:47:06.946786 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-09 04:47:06.946799 | orchestrator | Thursday 09 April 2026 04:45:38 +0000 (0:00:08.257) 0:09:55.392 ******** 2026-04-09 04:47:06.946810 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:47:06.946821 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:47:06.946832 | orchestrator | 2026-04-09 04:47:06.946843 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-09 04:47:06.946855 | orchestrator | Thursday 09 April 2026 04:45:42 +0000 (0:00:03.838) 0:09:59.230 ******** 2026-04-09 04:47:06.946866 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:47:06.946877 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:47:06.946888 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:47:06.946900 | orchestrator | 2026-04-09 04:47:06.946911 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-09 04:47:06.946922 | orchestrator | Thursday 09 April 2026 04:45:55 +0000 (0:00:13.551) 0:10:12.782 ******** 2026-04-09 04:47:06.946933 | orchestrator | ok: [testbed-node-1] 2026-04-09 
04:47:06.946945 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:47:06.946956 | orchestrator | 2026-04-09 04:47:06.946968 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-09 04:47:06.946979 | orchestrator | Thursday 09 April 2026 04:45:59 +0000 (0:00:03.767) 0:10:16.549 ******** 2026-04-09 04:47:06.946990 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:47:06.947010 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:47:06.947022 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:47:06.947033 | orchestrator | 2026-04-09 04:47:06.947044 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-09 04:47:06.947055 | orchestrator | Thursday 09 April 2026 04:46:06 +0000 (0:00:07.489) 0:10:24.038 ******** 2026-04-09 04:47:06.947067 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:47:06.947078 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:47:06.947089 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:47:06.947100 | orchestrator | 2026-04-09 04:47:06.947111 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-09 04:47:06.947122 | orchestrator | Thursday 09 April 2026 04:46:13 +0000 (0:00:06.883) 0:10:30.921 ******** 2026-04-09 04:47:06.947134 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:47:06.947145 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:47:06.947156 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:47:06.947167 | orchestrator | 2026-04-09 04:47:06.947178 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-09 04:47:06.947189 | orchestrator | Thursday 09 April 2026 04:46:20 +0000 (0:00:06.952) 0:10:37.874 ******** 2026-04-09 04:47:06.947201 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:47:06.947219 | orchestrator | skipping: [testbed-node-2] 2026-04-09 
04:47:06.947231 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:47:06.947260 | orchestrator | 2026-04-09 04:47:06.947271 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-09 04:47:06.947283 | orchestrator | Thursday 09 April 2026 04:46:27 +0000 (0:00:06.881) 0:10:44.755 ******** 2026-04-09 04:47:06.947294 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:47:06.947305 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:47:06.947316 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:47:06.947328 | orchestrator | 2026-04-09 04:47:06.947339 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-04-09 04:47:06.947350 | orchestrator | Thursday 09 April 2026 04:46:35 +0000 (0:00:07.434) 0:10:52.189 ******** 2026-04-09 04:47:06.947362 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:47:06.947373 | orchestrator | 2026-04-09 04:47:06.947385 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-09 04:47:06.947396 | orchestrator | Thursday 09 April 2026 04:46:38 +0000 (0:00:03.719) 0:10:55.909 ******** 2026-04-09 04:47:06.947407 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:47:06.947418 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:47:06.947430 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:47:06.947441 | orchestrator | 2026-04-09 04:47:06.947453 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-04-09 04:47:06.947464 | orchestrator | Thursday 09 April 2026 04:46:51 +0000 (0:00:13.093) 0:11:09.002 ******** 2026-04-09 04:47:06.947475 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:47:06.947486 | orchestrator | 2026-04-09 04:47:06.947498 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-09 04:47:06.947509 | orchestrator | Thursday 09 April 2026 
04:46:56 +0000 (0:00:04.599) 0:11:13.601 ******** 2026-04-09 04:47:06.947520 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:47:06.947532 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:47:06.947543 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:47:06.947554 | orchestrator | 2026-04-09 04:47:06.947565 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-09 04:47:06.947577 | orchestrator | Thursday 09 April 2026 04:47:03 +0000 (0:00:07.039) 0:11:20.641 ******** 2026-04-09 04:47:06.947588 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:47:06.947599 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:47:06.947610 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:47:06.947621 | orchestrator | 2026-04-09 04:47:06.947632 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-09 04:47:06.947644 | orchestrator | Thursday 09 April 2026 04:47:06 +0000 (0:00:02.566) 0:11:23.207 ******** 2026-04-09 04:47:06.947662 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:47:06.947673 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:47:06.947684 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:47:06.947695 | orchestrator | 2026-04-09 04:47:06.947707 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:47:06.947719 | orchestrator | testbed-node-0 : ok=129  changed=30  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-09 04:47:06.947731 | orchestrator | testbed-node-1 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-09 04:47:06.947749 | orchestrator | testbed-node-2 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-09 04:47:09.344335 | orchestrator | 2026-04-09 04:47:09.344424 | orchestrator | 2026-04-09 04:47:09.344434 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-09 04:47:09.344441 | orchestrator | Thursday 09 April 2026 04:47:08 +0000 (0:00:02.423) 0:11:25.631 ******** 2026-04-09 04:47:09.344447 | orchestrator | =============================================================================== 2026-04-09 04:47:09.344452 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.55s 2026-04-09 04:47:09.344458 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.09s 2026-04-09 04:47:09.344463 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.70s 2026-04-09 04:47:09.344469 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.26s 2026-04-09 04:47:09.344474 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.49s 2026-04-09 04:47:09.344479 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.43s 2026-04-09 04:47:09.344484 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.40s 2026-04-09 04:47:09.344489 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.16s 2026-04-09 04:47:09.344494 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.13s 2026-04-09 04:47:09.344500 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.08s 2026-04-09 04:47:09.344505 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.04s 2026-04-09 04:47:09.344511 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.95s 2026-04-09 04:47:09.344516 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.95s 2026-04-09 04:47:09.344521 | orchestrator | loadbalancer : Stop master 
haproxy container ---------------------------- 6.88s 2026-04-09 04:47:09.344526 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.88s 2026-04-09 04:47:09.344531 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.98s 2026-04-09 04:47:09.344536 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.93s 2026-04-09 04:47:09.344542 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.65s 2026-04-09 04:47:09.344561 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.51s 2026-04-09 04:47:09.344566 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.49s 2026-04-09 04:47:09.579721 | orchestrator | + osism apply -a upgrade opensearch 2026-04-09 04:47:10.902479 | orchestrator | 2026-04-09 04:47:10 | INFO  | Prepare task for execution of opensearch. 2026-04-09 04:47:10.969231 | orchestrator | 2026-04-09 04:47:10 | INFO  | Task 46ae30fd-966f-49c5-9be3-ab3cd9da494e (opensearch) was prepared for execution. 2026-04-09 04:47:10.969367 | orchestrator | 2026-04-09 04:47:10 | INFO  | It takes a moment until task 46ae30fd-966f-49c5-9be3-ab3cd9da494e (opensearch) has been started and output is visible here. 
2026-04-09 04:47:29.757703 | orchestrator | 2026-04-09 04:47:29.757800 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 04:47:29.757812 | orchestrator | 2026-04-09 04:47:29.757821 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 04:47:29.757829 | orchestrator | Thursday 09 April 2026 04:47:15 +0000 (0:00:01.688) 0:00:01.688 ******** 2026-04-09 04:47:29.757837 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:47:29.757846 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:47:29.757854 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:47:29.757862 | orchestrator | 2026-04-09 04:47:29.757870 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 04:47:29.757878 | orchestrator | Thursday 09 April 2026 04:47:17 +0000 (0:00:01.705) 0:00:03.393 ******** 2026-04-09 04:47:29.757887 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-09 04:47:29.757896 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-09 04:47:29.757903 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-09 04:47:29.757911 | orchestrator | 2026-04-09 04:47:29.757919 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-09 04:47:29.757927 | orchestrator | 2026-04-09 04:47:29.757935 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 04:47:29.757943 | orchestrator | Thursday 09 April 2026 04:47:20 +0000 (0:00:02.381) 0:00:05.775 ******** 2026-04-09 04:47:29.757952 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:47:29.757960 | orchestrator | 2026-04-09 04:47:29.757968 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-04-09 04:47:29.757976 | orchestrator | Thursday 09 April 2026 04:47:23 +0000 (0:00:03.712) 0:00:09.487 ******** 2026-04-09 04:47:29.757984 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 04:47:29.757991 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 04:47:29.757999 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 04:47:29.758007 | orchestrator | 2026-04-09 04:47:29.758061 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-09 04:47:29.758072 | orchestrator | Thursday 09 April 2026 04:47:26 +0000 (0:00:02.715) 0:00:12.202 ******** 2026-04-09 04:47:29.758083 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:29.758095 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:29.758149 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:29.758162 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:29.758173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:29.758187 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:29.758201 | orchestrator | 2026-04-09 04:47:29.758210 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 04:47:29.758218 | orchestrator | Thursday 09 April 2026 04:47:28 +0000 (0:00:02.337) 0:00:14.540 ******** 2026-04-09 04:47:29.758226 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:47:29.758235 | orchestrator | 2026-04-09 04:47:29.758248 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-09 04:47:34.888120 | orchestrator | Thursday 09 April 2026 04:47:30 +0000 
(0:00:02.045) 0:00:16.585 ******** 2026-04-09 04:47:34.888226 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:34.888237 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:34.888243 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:34.888283 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:34.888305 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:34.888311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:34.888317 | orchestrator | 2026-04-09 04:47:34.888324 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-09 04:47:34.888330 | orchestrator | Thursday 09 April 2026 04:47:34 +0000 (0:00:03.471) 0:00:20.057 ******** 2026-04-09 04:47:34.888336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:47:34.888384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:47:37.345455 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:47:37.345559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-04-09 04:47:37.345579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:47:37.345616 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:47:37.345640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:47:37.345670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:47:37.345683 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:47:37.345694 | orchestrator | 2026-04-09 04:47:37.345705 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-09 04:47:37.345716 | orchestrator | Thursday 09 April 2026 04:47:36 +0000 (0:00:02.107) 0:00:22.164 ******** 2026-04-09 04:47:37.345727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:47:37.345738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
2026-04-09 04:47:37.345756 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:47:37.345771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:47:37.345791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:47:41.017478 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:47:41.017590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:47:41.017607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:47:41.017638 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:47:41.017648 | orchestrator | 2026-04-09 04:47:41.017659 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-09 04:47:41.017669 | orchestrator | Thursday 09 April 2026 04:47:38 +0000 (0:00:02.168) 0:00:24.333 ******** 2026-04-09 04:47:41.017692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:41.017717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:41.017727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:41.017744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:41.017760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:41.017779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:54.141400 | orchestrator | 2026-04-09 04:47:54.141582 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-09 04:47:54.141601 | orchestrator | Thursday 09 April 2026 04:47:42 +0000 (0:00:03.522) 0:00:27.855 ******** 2026-04-09 04:47:54.141613 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:47:54.141651 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:47:54.141664 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:47:54.141675 | orchestrator | 2026-04-09 04:47:54.141686 | orchestrator | TASK [opensearch : 
Copying over opensearch-dashboards config file] ************* 2026-04-09 04:47:54.141697 | orchestrator | Thursday 09 April 2026 04:47:45 +0000 (0:00:03.625) 0:00:31.481 ******** 2026-04-09 04:47:54.141708 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:47:54.141723 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:47:54.141740 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:47:54.141767 | orchestrator | 2026-04-09 04:47:54.141786 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-09 04:47:54.141804 | orchestrator | Thursday 09 April 2026 04:47:49 +0000 (0:00:03.246) 0:00:34.728 ******** 2026-04-09 04:47:54.141825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:54.141866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:54.141885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 04:47:54.141932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:54.141973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:54.142006 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 04:47:54.142107 | orchestrator | 2026-04-09 04:47:54.142123 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-09 04:47:54.142137 | orchestrator | Thursday 09 April 2026 04:47:52 +0000 (0:00:03.365) 0:00:38.094 ******** 2026-04-09 04:47:54.142151 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 04:47:54.142164 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:47:54.142177 | orchestrator | } 2026-04-09 04:47:54.142190 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 04:47:54.142203 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:47:54.142217 | orchestrator | } 2026-04-09 04:47:54.142229 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 04:47:54.142242 | orchestrator 
|  "msg": "Notifying handlers" 2026-04-09 04:47:54.142255 | orchestrator | } 2026-04-09 04:47:54.142269 | orchestrator | 2026-04-09 04:47:54.142281 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 04:47:54.142303 | orchestrator | Thursday 09 April 2026 04:47:53 +0000 (0:00:01.348) 0:00:39.443 ******** 2026-04-09 04:47:54.142326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:51:14.648821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:51:14.648965 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:51:14.649054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:51:14.649073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:51:14.649109 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:51:14.649146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 04:51:14.649175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 04:51:14.649200 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:51:14.649219 | orchestrator | 2026-04-09 04:51:14.649238 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 04:51:14.649258 | orchestrator | Thursday 09 April 2026 04:47:56 +0000 (0:00:02.616) 0:00:42.059 ******** 2026-04-09 04:51:14.649276 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:51:14.649295 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:51:14.649312 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:51:14.649329 | orchestrator | 2026-04-09 04:51:14.649349 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 04:51:14.649381 | orchestrator | Thursday 09 April 2026 04:47:57 +0000 (0:00:01.362) 0:00:43.422 ******** 2026-04-09 04:51:14.649402 | orchestrator | 
2026-04-09 04:51:14.649423 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 04:51:14.649444 | orchestrator | Thursday 09 April 2026 04:47:58 +0000 (0:00:00.452) 0:00:43.875 ******** 2026-04-09 04:51:14.649464 | orchestrator | 2026-04-09 04:51:14.649483 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 04:51:14.649503 | orchestrator | Thursday 09 April 2026 04:47:58 +0000 (0:00:00.460) 0:00:44.336 ******** 2026-04-09 04:51:14.649522 | orchestrator | 2026-04-09 04:51:14.649541 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-09 04:51:14.649562 | orchestrator | Thursday 09 April 2026 04:47:59 +0000 (0:00:00.820) 0:00:45.157 ******** 2026-04-09 04:51:14.649597 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:51:14.649618 | orchestrator | 2026-04-09 04:51:14.649636 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-09 04:51:14.649656 | orchestrator | Thursday 09 April 2026 04:48:03 +0000 (0:00:03.633) 0:00:48.790 ******** 2026-04-09 04:51:14.649674 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:51:14.649692 | orchestrator | 2026-04-09 04:51:14.649710 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-09 04:51:14.649729 | orchestrator | Thursday 09 April 2026 04:48:09 +0000 (0:00:06.031) 0:00:54.822 ******** 2026-04-09 04:51:14.649749 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:51:14.649768 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:51:14.649788 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:51:14.649808 | orchestrator | 2026-04-09 04:51:14.649827 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-09 04:51:14.649847 | orchestrator | Thursday 09 April 2026 04:49:22 +0000 (0:01:13.260) 
0:02:08.082 ******** 2026-04-09 04:51:14.649866 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:51:14.649886 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:51:14.649904 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:51:14.649922 | orchestrator | 2026-04-09 04:51:14.649940 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 04:51:14.649959 | orchestrator | Thursday 09 April 2026 04:51:02 +0000 (0:01:40.025) 0:03:48.108 ******** 2026-04-09 04:51:14.649980 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:51:14.650106 | orchestrator | 2026-04-09 04:51:14.650126 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-09 04:51:14.650146 | orchestrator | Thursday 09 April 2026 04:51:04 +0000 (0:00:01.964) 0:03:50.073 ******** 2026-04-09 04:51:14.650164 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:51:14.650183 | orchestrator | 2026-04-09 04:51:14.650202 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-09 04:51:14.650221 | orchestrator | Thursday 09 April 2026 04:51:07 +0000 (0:00:03.371) 0:03:53.445 ******** 2026-04-09 04:51:14.650240 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:51:14.650258 | orchestrator | 2026-04-09 04:51:14.650277 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-09 04:51:14.650296 | orchestrator | Thursday 09 April 2026 04:51:11 +0000 (0:00:03.402) 0:03:56.848 ******** 2026-04-09 04:51:14.650315 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:51:14.650333 | orchestrator | 2026-04-09 04:51:14.650352 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-09 04:51:14.650389 | orchestrator | Thursday 09 April 2026 04:51:14 +0000 (0:00:03.476) 0:04:00.324 
******** 2026-04-09 04:51:18.043272 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:51:18.043368 | orchestrator | 2026-04-09 04:51:18.043382 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-09 04:51:18.043393 | orchestrator | Thursday 09 April 2026 04:51:15 +0000 (0:00:01.300) 0:04:01.625 ******** 2026-04-09 04:51:18.043401 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:51:18.043409 | orchestrator | 2026-04-09 04:51:18.043418 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:51:18.043428 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 04:51:18.043438 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 04:51:18.043446 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 04:51:18.043454 | orchestrator | 2026-04-09 04:51:18.043462 | orchestrator | 2026-04-09 04:51:18.043471 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:51:18.043502 | orchestrator | Thursday 09 April 2026 04:51:17 +0000 (0:00:01.686) 0:04:03.312 ******** 2026-04-09 04:51:18.043511 | orchestrator | =============================================================================== 2026-04-09 04:51:18.043519 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------ 100.03s 2026-04-09 04:51:18.043527 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.26s 2026-04-09 04:51:18.043535 | orchestrator | opensearch : Perform a flush -------------------------------------------- 6.03s 2026-04-09 04:51:18.043543 | orchestrator | opensearch : include_tasks ---------------------------------------------- 3.71s 2026-04-09 04:51:18.043551 | orchestrator | 
opensearch : Disable shard allocation ----------------------------------- 3.63s 2026-04-09 04:51:18.043559 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.63s 2026-04-09 04:51:18.043568 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.52s 2026-04-09 04:51:18.043576 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.48s 2026-04-09 04:51:18.043597 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.47s 2026-04-09 04:51:18.043605 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 3.40s 2026-04-09 04:51:18.043613 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.37s 2026-04-09 04:51:18.043621 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.36s 2026-04-09 04:51:18.043630 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.25s 2026-04-09 04:51:18.043638 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.72s 2026-04-09 04:51:18.043646 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.62s 2026-04-09 04:51:18.043654 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.38s 2026-04-09 04:51:18.043662 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.34s 2026-04-09 04:51:18.043670 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 2.17s 2026-04-09 04:51:18.043679 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 2.11s 2026-04-09 04:51:18.043688 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.05s 2026-04-09 04:51:18.246529 | orchestrator | + 
osism apply -a upgrade memcached 2026-04-09 04:51:19.599146 | orchestrator | 2026-04-09 04:51:19 | INFO  | Prepare task for execution of memcached. 2026-04-09 04:51:19.665686 | orchestrator | 2026-04-09 04:51:19 | INFO  | Task d2715dbc-51c1-405d-9cf6-de1420195812 (memcached) was prepared for execution. 2026-04-09 04:51:19.665838 | orchestrator | 2026-04-09 04:51:19 | INFO  | It takes a moment until task d2715dbc-51c1-405d-9cf6-de1420195812 (memcached) has been started and output is visible here. 2026-04-09 04:51:53.945625 | orchestrator | 2026-04-09 04:51:53.945755 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 04:51:53.945775 | orchestrator | 2026-04-09 04:51:53.945787 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 04:51:53.945799 | orchestrator | Thursday 09 April 2026 04:51:24 +0000 (0:00:01.533) 0:00:01.533 ******** 2026-04-09 04:51:53.945810 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:51:53.945822 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:51:53.945833 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:51:53.945844 | orchestrator | 2026-04-09 04:51:53.945855 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 04:51:53.945866 | orchestrator | Thursday 09 April 2026 04:51:26 +0000 (0:00:02.035) 0:00:03.569 ******** 2026-04-09 04:51:53.945878 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-09 04:51:53.945890 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-09 04:51:53.945901 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-09 04:51:53.945938 | orchestrator | 2026-04-09 04:51:53.945950 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-09 04:51:53.945961 | orchestrator | 2026-04-09 04:51:53.945971 | orchestrator | TASK 
[memcached : include_tasks] *********************************************** 2026-04-09 04:51:53.945982 | orchestrator | Thursday 09 April 2026 04:51:28 +0000 (0:00:01.661) 0:00:05.231 ******** 2026-04-09 04:51:53.945994 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:51:53.946006 | orchestrator | 2026-04-09 04:51:53.946083 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-09 04:51:53.946137 | orchestrator | Thursday 09 April 2026 04:51:32 +0000 (0:00:03.925) 0:00:09.156 ******** 2026-04-09 04:51:53.946149 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-09 04:51:53.946176 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-09 04:51:53.946190 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-09 04:51:53.946203 | orchestrator | 2026-04-09 04:51:53.946216 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-09 04:51:53.946230 | orchestrator | Thursday 09 April 2026 04:51:34 +0000 (0:00:02.470) 0:00:11.626 ******** 2026-04-09 04:51:53.946243 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-09 04:51:53.946254 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-09 04:51:53.946269 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-09 04:51:53.946288 | orchestrator | 2026-04-09 04:51:53.946306 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-09 04:51:53.946324 | orchestrator | Thursday 09 April 2026 04:51:37 +0000 (0:00:02.818) 0:00:14.445 ******** 2026-04-09 04:51:53.946345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 04:51:53.946390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 04:51:53.946435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 04:51:53.946460 | orchestrator | 2026-04-09 04:51:53.946472 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-09 04:51:53.946483 | orchestrator | Thursday 09 April 2026 04:51:39 +0000 (0:00:02.261) 0:00:16.706 ******** 2026-04-09 04:51:53.946494 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 04:51:53.946506 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:51:53.946517 | orchestrator | } 2026-04-09 04:51:53.946528 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 04:51:53.946539 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:51:53.946550 | orchestrator | } 2026-04-09 04:51:53.946561 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 04:51:53.946572 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:51:53.946583 | orchestrator | } 2026-04-09 04:51:53.946594 | orchestrator | 2026-04-09 04:51:53.946605 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 04:51:53.946615 | orchestrator | Thursday 09 April 2026 04:51:41 +0000 (0:00:01.440) 0:00:18.147 ******** 2026-04-09 04:51:53.946627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 04:51:53.946638 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:51:53.946650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 04:51:53.946661 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:51:53.946673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 04:51:53.946684 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:51:53.946695 | orchestrator | 
2026-04-09 04:51:53.946706 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-09 04:51:53.946717 | orchestrator | Thursday 09 April 2026 04:51:43 +0000 (0:00:02.051) 0:00:20.199 ******** 2026-04-09 04:51:53.946728 | orchestrator | changed: [testbed-node-1] 2026-04-09 04:51:53.946745 | orchestrator | changed: [testbed-node-0] 2026-04-09 04:51:53.946756 | orchestrator | changed: [testbed-node-2] 2026-04-09 04:51:53.946767 | orchestrator | 2026-04-09 04:51:53.946778 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 04:51:53.946790 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 04:51:53.946802 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 04:51:53.946813 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 04:51:53.946824 | orchestrator | 2026-04-09 04:51:53.946835 | orchestrator | 2026-04-09 04:51:53.946846 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 04:51:53.946865 | orchestrator | Thursday 09 April 2026 04:51:53 +0000 (0:00:10.757) 0:00:30.956 ******** 2026-04-09 04:51:54.359159 | orchestrator | =============================================================================== 2026-04-09 04:51:54.359325 | orchestrator | memcached : Restart memcached container -------------------------------- 10.76s 2026-04-09 04:51:54.359351 | orchestrator | memcached : include_tasks ----------------------------------------------- 3.92s 2026-04-09 04:51:54.359371 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.82s 2026-04-09 04:51:54.359390 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.47s 2026-04-09 04:51:54.359409 | 
orchestrator | service-check-containers : memcached | Check containers ----------------- 2.26s 2026-04-09 04:51:54.359429 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.05s 2026-04-09 04:51:54.359450 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.04s 2026-04-09 04:51:54.359471 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.66s 2026-04-09 04:51:54.359491 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.44s 2026-04-09 04:51:54.623650 | orchestrator | + osism apply -a upgrade redis 2026-04-09 04:51:56.059028 | orchestrator | 2026-04-09 04:51:56 | INFO  | Prepare task for execution of redis. 2026-04-09 04:51:56.126680 | orchestrator | 2026-04-09 04:51:56 | INFO  | Task 7b5c984d-7ec7-4136-b35b-cb85b23cb5f1 (redis) was prepared for execution. 2026-04-09 04:51:56.126804 | orchestrator | 2026-04-09 04:51:56 | INFO  | It takes a moment until task 7b5c984d-7ec7-4136-b35b-cb85b23cb5f1 (redis) has been started and output is visible here. 
2026-04-09 04:52:15.155353 | orchestrator |
2026-04-09 04:52:15.155481 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 04:52:15.155499 | orchestrator |
2026-04-09 04:52:15.155512 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 04:52:15.155523 | orchestrator | Thursday 09 April 2026 04:52:01 +0000 (0:00:02.085) 0:00:02.085 ********
2026-04-09 04:52:15.155535 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:52:15.155546 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:52:15.155557 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:52:15.155568 | orchestrator |
2026-04-09 04:52:15.155579 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 04:52:15.155591 | orchestrator | Thursday 09 April 2026 04:52:03 +0000 (0:00:02.032) 0:00:04.117 ********
2026-04-09 04:52:15.155602 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-09 04:52:15.155613 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-09 04:52:15.155624 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-09 04:52:15.155635 | orchestrator |
2026-04-09 04:52:15.155647 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-09 04:52:15.155658 | orchestrator |
2026-04-09 04:52:15.155669 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-09 04:52:15.155702 | orchestrator | Thursday 09 April 2026 04:52:06 +0000 (0:00:02.544) 0:00:06.662 ********
2026-04-09 04:52:15.155714 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:52:15.155726 | orchestrator |
2026-04-09 04:52:15.155737 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-09 04:52:15.155756 | orchestrator | Thursday 09 April 2026 04:52:10 +0000 (0:00:03.853) 0:00:10.515 ********
2026-04-09 04:52:15.155785 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:15.155808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:15.155829 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:15.155849 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:15.155892 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:15.155916 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:15.155950 | orchestrator |
2026-04-09 04:52:15.155964 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-09 04:52:15.155978 | orchestrator | Thursday 09 April 2026 04:52:13 +0000 (0:00:02.882) 0:00:13.397 ********
2026-04-09 04:52:15.155999 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:15.156020 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:15.156041 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:15.156060 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:15.156091 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:22.477705 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:22.477845 | orchestrator |
2026-04-09 04:52:22.477864 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-09 04:52:22.477876 | orchestrator | Thursday 09 April 2026 04:52:16 +0000 (0:00:03.114) 0:00:16.512 ********
2026-04-09 04:52:22.477904 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:22.477917 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:22.477928 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:22.477940 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:22.477951 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:22.477990 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:22.478003 | orchestrator |
2026-04-09 04:52:22.478077 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-04-09 04:52:22.478091 | orchestrator | Thursday 09 April 2026 04:52:20 +0000 (0:00:04.243) 0:00:20.755 ********
2026-04-09 04:52:22.478108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:22.478121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:22.478133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:22.478145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:22.478183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:22.478213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:51.851932 | orchestrator |
2026-04-09 04:52:51.852047 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-04-09 04:52:51.852065 | orchestrator | Thursday 09 April 2026 04:52:23 +0000 (0:00:03.076) 0:00:23.831 ********
2026-04-09 04:52:51.852078 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 04:52:51.852091 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 04:52:51.852103 | orchestrator | }
2026-04-09 04:52:51.852115 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 04:52:51.852126 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 04:52:51.852137 | orchestrator | }
2026-04-09 04:52:51.852148 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 04:52:51.852160 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 04:52:51.852171 | orchestrator | }
2026-04-09 04:52:51.852183 | orchestrator |
2026-04-09 04:52:51.852195 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 04:52:51.852206 | orchestrator | Thursday 09 April 2026 04:52:25 +0000 (0:00:01.447) 0:00:25.279 ********
2026-04-09 04:52:51.852277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:51.852294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:51.852307 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:52:51.852319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:51.852356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:51.852368 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:52:51.852380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 04:52:51.852410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 04:52:51.852423 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:52:51.852434 | orchestrator |
2026-04-09 04:52:51.852445 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-09 04:52:51.852457 | orchestrator | Thursday 09 April 2026 04:52:27 +0000 (0:00:02.092) 0:00:27.372 ********
2026-04-09 04:52:51.852470 | orchestrator |
2026-04-09 04:52:51.852489 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-09 04:52:51.852502 | orchestrator | Thursday 09 April 2026 04:52:27 +0000 (0:00:00.464) 0:00:27.836 ********
2026-04-09 04:52:51.852516 | orchestrator |
2026-04-09 04:52:51.852529 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-09 04:52:51.852542 | orchestrator | Thursday 09 April 2026 04:52:28 +0000 (0:00:00.446) 0:00:28.283 ********
2026-04-09 04:52:51.852554 | orchestrator |
2026-04-09 04:52:51.852567 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-09 04:52:51.852580 | orchestrator | Thursday 09 April 2026 04:52:28 +0000 (0:00:00.800) 0:00:29.083 ********
2026-04-09 04:52:51.852593 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:52:51.852607 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:52:51.852619 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:52:51.852632 | orchestrator |
2026-04-09 04:52:51.852645 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-09 04:52:51.852657 | orchestrator | Thursday 09 April 2026 04:52:40 +0000 (0:00:11.119) 0:00:40.202 ********
2026-04-09 04:52:51.852670 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:52:51.852683 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:52:51.852694 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:52:51.852705 | orchestrator |
2026-04-09 04:52:51.852716 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 04:52:51.852729 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 04:52:51.852750 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 04:52:51.852761 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 04:52:51.852772 | orchestrator |
2026-04-09 04:52:51.852783 | orchestrator |
2026-04-09 04:52:51.852794 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 04:52:51.852805 | orchestrator | Thursday 09 April 2026 04:52:51 +0000 (0:00:11.541) 0:00:51.744 ********
2026-04-09 04:52:51.852816 | orchestrator | ===============================================================================
2026-04-09 04:52:51.852827 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.54s
2026-04-09 04:52:51.852838 | orchestrator | redis : Restart redis container ---------------------------------------- 11.12s
2026-04-09 04:52:51.852849 | orchestrator | redis : Copying over redis config files --------------------------------- 4.24s
2026-04-09 04:52:51.852860 | orchestrator | redis : include_tasks --------------------------------------------------- 3.85s
2026-04-09 04:52:51.852871 | orchestrator | redis : Copying over default config.json files -------------------------- 3.11s
2026-04-09 04:52:51.852881 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.08s
2026-04-09 04:52:51.852892 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.88s
2026-04-09 04:52:51.852903 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.54s
2026-04-09 04:52:51.852914 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.09s
2026-04-09 04:52:51.852925 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.03s
2026-04-09 04:52:51.852936 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.71s
2026-04-09 04:52:51.852947 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.45s
2026-04-09 04:52:52.062925 | orchestrator | + osism apply -a upgrade mariadb
2026-04-09 04:52:53.445937 | orchestrator | 2026-04-09 04:52:53 | INFO  | Prepare task for execution of mariadb.
2026-04-09 04:52:53.515852 | orchestrator | 2026-04-09 04:52:53 | INFO  | Task 25058933-d6a3-4eb0-adab-c06600637fc1 (mariadb) was prepared for execution.
2026-04-09 04:52:53.515947 | orchestrator | 2026-04-09 04:52:53 | INFO  | It takes a moment until task 25058933-d6a3-4eb0-adab-c06600637fc1 (mariadb) has been started and output is visible here.
2026-04-09 04:53:23.340337 | orchestrator |
2026-04-09 04:53:23.340418 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 04:53:23.340426 | orchestrator |
2026-04-09 04:53:23.340432 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 04:53:23.340437 | orchestrator | Thursday 09 April 2026 04:52:59 +0000 (0:00:02.658) 0:00:02.658 ********
2026-04-09 04:53:23.340442 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:53:23.340447 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:53:23.340452 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:53:23.340457 | orchestrator |
2026-04-09 04:53:23.340462 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 04:53:23.340467 | orchestrator | Thursday 09 April 2026 04:53:02 +0000 (0:00:02.386) 0:00:05.044 ********
2026-04-09 04:53:23.340472 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-09 04:53:23.340477 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-09 04:53:23.340481 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-09 04:53:23.340486 | orchestrator |
2026-04-09 04:53:23.340491 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-09 04:53:23.340496 | orchestrator |
2026-04-09 04:53:23.340501 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-09 04:53:23.340520 | orchestrator | Thursday 09 April 2026 04:53:04 +0000 (0:00:02.729) 0:00:07.773 ********
2026-04-09 04:53:23.340525 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 04:53:23.340530 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 04:53:23.340544 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 04:53:23.340549 | orchestrator |
2026-04-09 04:53:23.340554 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-09 04:53:23.340559 | orchestrator | Thursday 09 April 2026 04:53:07 +0000 (0:00:02.373) 0:00:10.147 ********
2026-04-09 04:53:23.340564 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:53:23.340569 | orchestrator |
2026-04-09 04:53:23.340574 | orchestrator | TASK [mariadb : Remove mariadb-clustercheck] ***********************************
2026-04-09 04:53:23.340578 | orchestrator | Thursday 09 April 2026 04:53:09 +0000 (0:00:01.861) 0:00:12.009 ********
2026-04-09 04:53:23.340583 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:53:23.340588 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:53:23.340592 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:53:23.340597 | orchestrator |
2026-04-09 04:53:23.340602 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-04-09 04:53:23.340606 | orchestrator | Thursday 09 April 2026 04:53:11 +0000 (0:00:02.814) 0:00:14.823 ********
2026-04-09 04:53:23.340614 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 04:53:23.340646 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 04:53:23.340659 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 04:53:23.340665 | orchestrator | 2026-04-09 04:53:23.340670 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-09 04:53:23.340674 | orchestrator | Thursday 09 April 2026 04:53:16 +0000 (0:00:04.164) 0:00:18.988 ******** 2026-04-09 04:53:23.340679 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:53:23.340686 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:53:23.340694 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:53:23.340701 | orchestrator | 2026-04-09 04:53:23.340708 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-09 04:53:23.340715 | orchestrator | Thursday 09 April 2026 04:53:17 +0000 (0:00:01.698) 0:00:20.687 ******** 2026-04-09 04:53:23.340722 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:53:23.340729 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:53:23.340737 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:53:23.340744 | orchestrator | 2026-04-09 04:53:23.340752 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-09 04:53:23.340758 | orchestrator | Thursday 
09 April 2026 04:53:19 +0000 (0:00:02.264) 0:00:22.951 ******** 2026-04-09 04:53:23.340776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 04:53:36.185484 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 04:53:36.185650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 04:53:36.185713 | orchestrator | 2026-04-09 04:53:36.185739 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-09 04:53:36.185761 | orchestrator | Thursday 09 April 2026 04:53:24 +0000 (0:00:04.587) 0:00:27.539 ******** 2026-04-09 
04:53:36.185780 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:53:36.185800 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:53:36.185819 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:53:36.185838 | orchestrator | 2026-04-09 04:53:36.185858 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-09 04:53:36.185900 | orchestrator | Thursday 09 April 2026 04:53:26 +0000 (0:00:02.052) 0:00:29.591 ******** 2026-04-09 04:53:36.185922 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:53:36.185941 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:53:36.185959 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:53:36.185978 | orchestrator | 2026-04-09 04:53:36.185998 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 04:53:36.186097 | orchestrator | Thursday 09 April 2026 04:53:31 +0000 (0:00:05.310) 0:00:34.902 ******** 2026-04-09 04:53:36.186123 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:53:36.186144 | orchestrator | 2026-04-09 04:53:36.186164 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-09 04:53:36.186184 | orchestrator | Thursday 09 April 2026 04:53:33 +0000 (0:00:01.740) 0:00:36.643 ******** 2026-04-09 04:53:36.186207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:53:36.186244 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:53:36.186292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:53:43.067644 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:53:43.067777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:53:43.067840 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:53:43.067864 | orchestrator | 2026-04-09 04:53:43.067879 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-09 04:53:43.067891 | orchestrator | Thursday 09 April 2026 04:53:37 +0000 (0:00:03.817) 0:00:40.460 ******** 2026-04-09 04:53:43.067926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:53:43.067948 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:53:43.067997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:53:43.068033 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:53:43.068053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:53:43.068066 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:53:43.068077 | orchestrator | 2026-04-09 04:53:43.068089 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-09 04:53:43.068100 | orchestrator | Thursday 09 April 2026 04:53:40 +0000 (0:00:03.480) 0:00:43.941 ******** 2026-04-09 04:53:43.068122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:53:48.307595 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:53:48.307714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:53:48.307729 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:53:48.307738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:53:48.307761 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:53:48.307769 | orchestrator | 2026-04-09 04:53:48.307776 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-09 04:53:48.307785 | orchestrator | Thursday 09 April 2026 04:53:45 +0000 (0:00:04.035) 0:00:47.976 ******** 2026-04-09 04:53:48.307807 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 04:53:48.307851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 04:53:48.307873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 04:54:04.165019 | orchestrator | 2026-04-09 04:54:04.165164 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-09 04:54:04.165196 | orchestrator | Thursday 09 April 2026 04:53:49 +0000 (0:00:04.433) 0:00:52.410 ******** 2026-04-09 04:54:04.165217 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 04:54:04.165232 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:54:04.165244 | orchestrator | } 2026-04-09 04:54:04.165256 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 04:54:04.165267 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:54:04.165278 | orchestrator | } 2026-04-09 04:54:04.165290 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 04:54:04.165301 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 04:54:04.165312 | orchestrator | } 2026-04-09 04:54:04.165323 | orchestrator | 2026-04-09 04:54:04.165351 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 04:54:04.165362 | orchestrator | Thursday 09 April 2026 04:53:50 +0000 (0:00:01.490) 0:00:53.900 ******** 2026-04-09 04:54:04.165378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:54:04.165457 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:04.165496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:54:04.165511 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:04.165529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:54:04.165553 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:04.165566 | orchestrator | 2026-04-09 04:54:04.165580 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-09 04:54:04.165593 | orchestrator | Thursday 09 April 2026 04:53:54 +0000 (0:00:03.841) 0:00:57.742 ******** 2026-04-09 04:54:04.165606 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:04.165619 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:04.165637 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:04.165661 | orchestrator | 2026-04-09 04:54:04.165688 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-09 04:54:04.165706 | orchestrator | Thursday 09 April 2026 04:53:56 +0000 (0:00:01.629) 0:00:59.372 ******** 2026-04-09 04:54:04.165725 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:04.165744 | orchestrator | 2026-04-09 04:54:04.165761 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-09 04:54:04.165779 | orchestrator | Thursday 09 April 2026 04:53:57 +0000 (0:00:01.214) 0:01:00.586 ******** 2026-04-09 04:54:04.165800 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:04.165814 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:04.165828 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:04.165841 | 
orchestrator | 2026-04-09 04:54:04.165855 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-09 04:54:04.165868 | orchestrator | Thursday 09 April 2026 04:53:59 +0000 (0:00:01.467) 0:01:02.054 ******** 2026-04-09 04:54:04.165886 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:04.165905 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:04.165923 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:04.165941 | orchestrator | 2026-04-09 04:54:04.165959 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-09 04:54:04.165975 | orchestrator | Thursday 09 April 2026 04:54:00 +0000 (0:00:01.507) 0:01:03.561 ******** 2026-04-09 04:54:04.165994 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:04.166012 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:04.166097 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:04.166109 | orchestrator | 2026-04-09 04:54:04.166121 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-09 04:54:04.166132 | orchestrator | Thursday 09 April 2026 04:54:02 +0000 (0:00:01.652) 0:01:05.214 ******** 2026-04-09 04:54:04.166142 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:04.166153 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:04.166164 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:04.166175 | orchestrator | 2026-04-09 04:54:04.166185 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-09 04:54:04.166196 | orchestrator | Thursday 09 April 2026 04:54:03 +0000 (0:00:01.431) 0:01:06.645 ******** 2026-04-09 04:54:04.166207 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:04.166218 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:04.166228 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:04.166239 | 
orchestrator | 2026-04-09 04:54:04.166262 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-09 04:54:22.718977 | orchestrator | Thursday 09 April 2026 04:54:05 +0000 (0:00:01.479) 0:01:08.125 ******** 2026-04-09 04:54:22.719088 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719104 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719116 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719127 | orchestrator | 2026-04-09 04:54:22.719139 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-09 04:54:22.719151 | orchestrator | Thursday 09 April 2026 04:54:06 +0000 (0:00:01.546) 0:01:09.672 ******** 2026-04-09 04:54:22.719162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 04:54:22.719190 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 04:54:22.719201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 04:54:22.719212 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719223 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-09 04:54:22.719234 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-09 04:54:22.719244 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-09 04:54:22.719255 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719266 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-09 04:54:22.719277 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-09 04:54:22.719287 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-09 04:54:22.719298 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719309 | orchestrator | 2026-04-09 04:54:22.719321 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to 
temp file] *** 2026-04-09 04:54:22.719332 | orchestrator | Thursday 09 April 2026 04:54:08 +0000 (0:00:01.761) 0:01:11.433 ******** 2026-04-09 04:54:22.719343 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719354 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719364 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719376 | orchestrator | 2026-04-09 04:54:22.719387 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-09 04:54:22.719398 | orchestrator | Thursday 09 April 2026 04:54:09 +0000 (0:00:01.398) 0:01:12.832 ******** 2026-04-09 04:54:22.719409 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719420 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719430 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719467 | orchestrator | 2026-04-09 04:54:22.719478 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-09 04:54:22.719489 | orchestrator | Thursday 09 April 2026 04:54:11 +0000 (0:00:01.345) 0:01:14.177 ******** 2026-04-09 04:54:22.719500 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719512 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719523 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719534 | orchestrator | 2026-04-09 04:54:22.719545 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-09 04:54:22.719556 | orchestrator | Thursday 09 April 2026 04:54:12 +0000 (0:00:01.534) 0:01:15.712 ******** 2026-04-09 04:54:22.719567 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719578 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719589 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719600 | orchestrator | 2026-04-09 04:54:22.719612 | orchestrator | TASK [mariadb : Starting first MariaDB container] 
****************************** 2026-04-09 04:54:22.719623 | orchestrator | Thursday 09 April 2026 04:54:14 +0000 (0:00:01.400) 0:01:17.113 ******** 2026-04-09 04:54:22.719634 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719645 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719656 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719667 | orchestrator | 2026-04-09 04:54:22.719678 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-09 04:54:22.719689 | orchestrator | Thursday 09 April 2026 04:54:15 +0000 (0:00:01.440) 0:01:18.554 ******** 2026-04-09 04:54:22.719722 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719734 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719745 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719756 | orchestrator | 2026-04-09 04:54:22.719767 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-09 04:54:22.719779 | orchestrator | Thursday 09 April 2026 04:54:16 +0000 (0:00:01.380) 0:01:19.935 ******** 2026-04-09 04:54:22.719790 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719801 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719812 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719823 | orchestrator | 2026-04-09 04:54:22.719834 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-09 04:54:22.719845 | orchestrator | Thursday 09 April 2026 04:54:18 +0000 (0:00:01.731) 0:01:21.666 ******** 2026-04-09 04:54:22.719856 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719866 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.719877 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:22.719888 | orchestrator | 2026-04-09 04:54:22.719899 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] 
**************************** 2026-04-09 04:54:22.719910 | orchestrator | Thursday 09 April 2026 04:54:20 +0000 (0:00:01.431) 0:01:23.098 ******** 2026-04-09 04:54:22.719953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}})  2026-04-09 04:54:22.719970 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:22.719983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:54:22.720003 
| orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:22.720030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:54:40.835644 | orchestrator | skipping: [testbed-node-2] 
2026-04-09 04:54:40.835787 | orchestrator | 2026-04-09 04:54:40.835815 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-09 04:54:40.835836 | orchestrator | Thursday 09 April 2026 04:54:23 +0000 (0:00:03.685) 0:01:26.784 ******** 2026-04-09 04:54:40.835856 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:40.835875 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:54:40.835894 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:54:40.835912 | orchestrator | 2026-04-09 04:54:40.835932 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-09 04:54:40.835952 | orchestrator | Thursday 09 April 2026 04:54:25 +0000 (0:00:01.395) 0:01:28.180 ******** 2026-04-09 04:54:40.835978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 04:54:40.836038 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:54:40.836099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 04:54:40.836125 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:54:40.836148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 04:54:40.836177 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:54:40.836196 | orchestrator |
2026-04-09 04:54:40.836216 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-04-09 04:54:40.836236 | orchestrator | Thursday 09 April 2026 04:54:28 +0000 (0:00:03.762) 0:01:31.942 ********
2026-04-09 04:54:40.836255 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:54:40.836275 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:54:40.836296 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:54:40.836316 | orchestrator |
2026-04-09 04:54:40.836336 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-04-09 04:54:40.836356 | orchestrator | Thursday 09 April 2026 04:54:30 +0000 (0:00:01.822) 0:01:33.765 ********
2026-04-09 04:54:40.836375 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:54:40.836395 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:54:40.836416 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:54:40.836435 | orchestrator |
2026-04-09 04:54:40.836454 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-04-09 04:54:40.836475 | orchestrator | Thursday 09 April 2026 04:54:32 +0000 (0:00:01.360) 0:01:35.125 ********
2026-04-09 04:54:40.836526 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:54:40.836544 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:54:40.836562 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:54:40.836581 | orchestrator |
2026-04-09 04:54:40.836601 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-04-09 04:54:40.836619 | orchestrator | Thursday 09 April 2026 04:54:33 +0000 (0:00:01.359) 0:01:36.485 ********
2026-04-09 04:54:40.836637 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:54:40.836655 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:54:40.836673 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:54:40.836691 | orchestrator |
2026-04-09 04:54:40.836709 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-09 04:54:40.836727 | orchestrator | Thursday 09 April 2026 04:54:35 +0000 (0:00:01.818) 0:01:38.304 ********
2026-04-09 04:54:40.836745 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:54:40.836763 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:54:40.836780 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:54:40.836799 | orchestrator |
2026-04-09 04:54:40.836818 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-04-09 04:54:40.836836 | orchestrator | Thursday 09 April 2026 04:54:37 +0000 (0:00:01.804) 0:01:40.108 ********
2026-04-09 04:54:40.836854 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:54:40.836874 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:54:40.836905 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:54:40.836924 | orchestrator |
2026-04-09 04:54:40.836951 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-09 04:54:40.836971 | orchestrator | Thursday 09 April 2026 04:54:39 +0000 (0:00:02.169) 0:01:42.278 ********
2026-04-09 04:54:40.836990 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:54:40.837009 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:54:40.837027 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:54:40.837046 | orchestrator |
2026-04-09 04:54:40.837065 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-09 04:54:40.837084 | orchestrator | Thursday 09 April 2026 04:54:40 +0000 (0:00:01.406) 0:01:43.685 ********
2026-04-09 04:54:40.837107 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.846487 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.846637 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.846666 | orchestrator |
2026-04-09 04:57:20.846688 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-09 04:57:20.846712 | orchestrator | Thursday 09 April 2026 04:54:42 +0000 (0:00:01.361) 0:01:45.046 ********
2026-04-09 04:57:20.846730 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.846749 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.846766 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.846784 | orchestrator |
2026-04-09 04:57:20.846802 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-09 04:57:20.846854 | orchestrator | Thursday 09 April 2026 04:54:43 +0000 (0:00:01.799) 0:01:46.845 ********
2026-04-09 04:57:20.846873 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.846890 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.846909 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.846928 | orchestrator |
2026-04-09 04:57:20.846946 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-09 04:57:20.846966 | orchestrator | Thursday 09 April 2026 04:54:45 +0000 (0:00:01.807) 0:01:48.653 ********
2026-04-09 04:57:20.846986 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:57:20.847007 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.847027 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.847045 | orchestrator |
2026-04-09 04:57:20.847064 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-09 04:57:20.847083 | orchestrator | Thursday 09 April 2026 04:54:47 +0000 (0:00:01.447) 0:01:50.101 ********
2026-04-09 04:57:20.847102 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.847121 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.847142 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.847162 | orchestrator |
2026-04-09 04:57:20.847180 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-09 04:57:20.847199 | orchestrator | Thursday 09 April 2026 04:54:50 +0000 (0:00:03.575) 0:01:53.676 ********
2026-04-09 04:57:20.847218 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.847237 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.847255 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.847274 | orchestrator |
2026-04-09 04:57:20.847293 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-09 04:57:20.847312 | orchestrator | Thursday 09 April 2026 04:54:52 +0000 (0:00:01.430) 0:01:55.107 ********
2026-04-09 04:57:20.847332 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.847351 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.847369 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.847386 | orchestrator |
2026-04-09 04:57:20.847404 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-09 04:57:20.847424 | orchestrator | Thursday 09 April 2026 04:54:53 +0000 (0:00:01.401) 0:01:56.508 ********
2026-04-09 04:57:20.847442 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:57:20.847461 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.847480 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.847498 | orchestrator |
2026-04-09 04:57:20.847516 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-09 04:57:20.847577 | orchestrator | Thursday 09 April 2026 04:54:55 +0000 (0:00:01.771) 0:01:58.280 ********
2026-04-09 04:57:20.847598 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:57:20.847617 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.847635 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.847653 | orchestrator |
2026-04-09 04:57:20.847673 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-09 04:57:20.847692 | orchestrator | Thursday 09 April 2026 04:54:56 +0000 (0:00:01.390) 0:01:59.671 ********
2026-04-09 04:57:20.847710 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:57:20.847729 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.847748 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.847767 | orchestrator |
2026-04-09 04:57:20.847786 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-09 04:57:20.847807 | orchestrator | Thursday 09 April 2026 04:54:58 +0000 (0:00:01.784) 0:02:01.455 ********
2026-04-09 04:57:20.847856 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:57:20.847875 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:57:20.847893 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:57:20.847912 | orchestrator |
2026-04-09 04:57:20.847930 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-09 04:57:20.847949 | orchestrator | Thursday 09 April 2026 04:54:59 +0000 (0:00:01.502) 0:02:02.957 ********
2026-04-09 04:57:20.847967 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:57:20.847985 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.848004 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.848022 | orchestrator |
2026-04-09 04:57:20.848041 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-09 04:57:20.848058 | orchestrator |
2026-04-09 04:57:20.848076 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-09 04:57:20.848095 | orchestrator | Thursday 09 April 2026 04:55:01 +0000 (0:00:01.938) 0:02:04.895 ********
2026-04-09 04:57:20.848112 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:57:20.848129 | orchestrator |
2026-04-09 04:57:20.848147 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-09 04:57:20.848164 | orchestrator | Thursday 09 April 2026 04:55:28 +0000 (0:00:26.308) 0:02:31.204 ********
2026-04-09 04:57:20.848182 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service port liveness (10 retries left).
2026-04-09 04:57:20.848201 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.848221 | orchestrator |
2026-04-09 04:57:20.848260 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-09 04:57:20.848282 | orchestrator | Thursday 09 April 2026 04:55:36 +0000 (0:00:08.170) 0:02:39.374 ********
2026-04-09 04:57:20.848299 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.848318 | orchestrator |
2026-04-09 04:57:20.848338 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-09 04:57:20.848357 | orchestrator |
2026-04-09 04:57:20.848375 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-09 04:57:20.848393 | orchestrator | Thursday 09 April 2026 04:55:39 +0000 (0:00:03.033) 0:02:42.408 ********
2026-04-09 04:57:20.848412 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:57:20.848430 | orchestrator |
2026-04-09 04:57:20.848478 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-09 04:57:20.848498 | orchestrator | Thursday 09 April 2026 04:56:05 +0000 (0:00:26.490) 0:03:08.898 ********
2026-04-09 04:57:20.848516 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left).
2026-04-09 04:57:20.848535 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.848553 | orchestrator |
2026-04-09 04:57:20.848571 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-09 04:57:20.848588 | orchestrator | Thursday 09 April 2026 04:56:13 +0000 (0:00:07.972) 0:03:16.870 ********
2026-04-09 04:57:20.848599 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.848626 | orchestrator |
2026-04-09 04:57:20.848638 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-09 04:57:20.848649 | orchestrator |
2026-04-09 04:57:20.848660 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-09 04:57:20.848671 | orchestrator | Thursday 09 April 2026 04:56:16 +0000 (0:00:02.985) 0:03:19.855 ********
2026-04-09 04:57:20.848682 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:57:20.848692 | orchestrator |
2026-04-09 04:57:20.848704 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-09 04:57:20.848714 | orchestrator | Thursday 09 April 2026 04:56:43 +0000 (0:00:26.210) 0:03:46.066 ********
2026-04-09 04:57:20.848725 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.848736 | orchestrator |
2026-04-09 04:57:20.848747 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-09 04:57:20.848758 | orchestrator | Thursday 09 April 2026 04:56:47 +0000 (0:00:04.489) 0:03:50.555 ********
2026-04-09 04:57:20.848769 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.848780 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-09 04:57:20.848791 | orchestrator |
2026-04-09 04:57:20.848802 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-09 04:57:20.848898 | orchestrator | skipping: no hosts matched
2026-04-09 04:57:20.848913 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-09 04:57:20.848924 | orchestrator | mariadb_bootstrap_restart
2026-04-09 04:57:20.848935 | orchestrator |
2026-04-09 04:57:20.848946 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-09 04:57:20.848957 | orchestrator | skipping: no hosts matched
2026-04-09 04:57:20.848968 | orchestrator |
2026-04-09 04:57:20.848979 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-09 04:57:20.848990 | orchestrator |
2026-04-09 04:57:20.849001 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-09 04:57:20.849012 | orchestrator | Thursday 09 April 2026 04:56:51 +0000 (0:00:03.950) 0:03:54.506 ********
2026-04-09 04:57:20.849024 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:57:20.849034 | orchestrator |
2026-04-09 04:57:20.849046 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-09 04:57:20.849057 | orchestrator | Thursday 09 April 2026 04:56:53 +0000 (0:00:01.810) 0:03:56.316 ********
2026-04-09 04:57:20.849068 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.849079 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.849090 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.849101 | orchestrator |
2026-04-09 04:57:20.849112 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-09 04:57:20.849123 | orchestrator | Thursday 09 April 2026 04:56:56 +0000 (0:00:03.185) 0:03:59.502 ********
2026-04-09 04:57:20.849134 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.849145 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.849156 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:57:20.849167 | orchestrator |
2026-04-09 04:57:20.849178 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-09 04:57:20.849189 | orchestrator | Thursday 09 April 2026 04:56:59 +0000 (0:00:03.310) 0:04:02.813 ********
2026-04-09 04:57:20.849200 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.849211 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.849221 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.849232 | orchestrator |
2026-04-09 04:57:20.849243 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-09 04:57:20.849254 | orchestrator | Thursday 09 April 2026 04:57:03 +0000 (0:00:03.325) 0:04:06.139 ********
2026-04-09 04:57:20.849265 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.849276 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.849287 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:57:20.849306 | orchestrator |
2026-04-09 04:57:20.849317 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-04-09 04:57:20.849328 | orchestrator | Thursday 09 April 2026 04:57:06 +0000 (0:00:03.335) 0:04:09.474 ********
2026-04-09 04:57:20.849339 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.849350 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.849361 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.849372 | orchestrator |
2026-04-09 04:57:20.849383 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-04-09 04:57:20.849394 | orchestrator | Thursday 09 April 2026 04:57:13 +0000 (0:00:06.746) 0:04:16.221 ********
2026-04-09 04:57:20.849405 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:57:20.849415 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.849425 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.849435 | orchestrator |
2026-04-09 04:57:20.849445 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-04-09 04:57:20.849462 | orchestrator | Thursday 09 April 2026 04:57:16 +0000 (0:00:03.393) 0:04:19.614 ********
2026-04-09 04:57:20.849473 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:57:20.849482 | orchestrator | skipping: [testbed-node-1]
2026-04-09 04:57:20.849492 | orchestrator | skipping: [testbed-node-2]
2026-04-09 04:57:20.849502 | orchestrator |
2026-04-09 04:57:20.849512 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-09 04:57:20.849522 | orchestrator | Thursday 09 April 2026 04:57:18 +0000 (0:00:01.393) 0:04:21.008 ********
2026-04-09 04:57:20.849532 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:57:20.849542 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:57:20.849552 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:57:20.849562 | orchestrator |
2026-04-09 04:57:20.849581 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-09 04:57:41.937570 | orchestrator | Thursday 09 April 2026 04:57:21 +0000 (0:00:03.580) 0:04:24.589 ********
2026-04-09 04:57:41.937701 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:57:41.937718 | orchestrator |
2026-04-09 04:57:41.937731 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ******************************
2026-04-09 04:57:41.937743 | orchestrator | Thursday 09 April 2026 04:57:23 +0000 (0:00:02.073) 0:04:26.662 ********
2026-04-09 04:57:41.937754 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:57:41.937777 | orchestrator | changed: [testbed-node-2]
2026-04-09 04:57:41.937789 | orchestrator | changed: [testbed-node-1]
2026-04-09 04:57:41.937801 | orchestrator |
2026-04-09 04:57:41.937813 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 04:57:41.937826 | orchestrator | testbed-node-0 : ok=35  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-09 04:57:41.937838 | orchestrator | testbed-node-1 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-09 04:57:41.937849 | orchestrator | testbed-node-2 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-09 04:57:41.937930 | orchestrator |
2026-04-09 04:57:41.937942 | orchestrator |
2026-04-09 04:57:41.937953 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 04:57:41.937965 | orchestrator | Thursday 09 April 2026 04:57:41 +0000 (0:00:17.842) 0:04:44.505 ********
2026-04-09 04:57:41.937976 | orchestrator | ===============================================================================
2026-04-09 04:57:41.937986 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 79.01s
2026-04-09 04:57:41.937997 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 20.63s
2026-04-09 04:57:41.938008 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.84s
2026-04-09 04:57:41.938067 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 9.97s
2026-04-09 04:57:41.938107 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.75s
2026-04-09 04:57:41.938121 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.31s
2026-04-09 04:57:41.938134 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.59s
2026-04-09 04:57:41.938147 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.43s
2026-04-09 04:57:41.938160 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.16s
2026-04-09 04:57:41.938186 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.04s
2026-04-09 04:57:41.938199 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.84s
2026-04-09 04:57:41.938212 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.82s
2026-04-09 04:57:41.938224 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.76s
2026-04-09 04:57:41.938237 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.69s
2026-04-09 04:57:41.938249 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.58s
2026-04-09 04:57:41.938262 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.58s
2026-04-09 04:57:41.938275 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.48s
2026-04-09 04:57:41.938288 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.39s
2026-04-09 04:57:41.938301 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.33s
2026-04-09 04:57:41.938314 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 3.33s
2026-04-09 04:57:42.143573 | orchestrator | + osism apply -a upgrade rabbitmq
2026-04-09 04:57:43.463659 | orchestrator | 2026-04-09 04:57:43 | INFO  | Prepare task for execution of rabbitmq.
2026-04-09 04:57:43.530305 | orchestrator | 2026-04-09 04:57:43 | INFO  | Task 21caa9b3-f5c5-4b76-9472-917a904bceaf (rabbitmq) was prepared for execution.
2026-04-09 04:57:43.530411 | orchestrator | 2026-04-09 04:57:43 | INFO  | It takes a moment until task 21caa9b3-f5c5-4b76-9472-917a904bceaf (rabbitmq) has been started and output is visible here.
2026-04-09 04:58:26.486142 | orchestrator |
2026-04-09 04:58:26.486275 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 04:58:26.486301 | orchestrator |
2026-04-09 04:58:26.486318 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 04:58:26.486335 | orchestrator | Thursday 09 April 2026 04:57:48 +0000 (0:00:02.016) 0:00:02.017 ********
2026-04-09 04:58:26.486354 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:58:26.486373 | orchestrator | ok: [testbed-node-1]
2026-04-09 04:58:26.486390 | orchestrator | ok: [testbed-node-2]
2026-04-09 04:58:26.486406 | orchestrator |
2026-04-09 04:58:26.486423 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 04:58:26.486440 | orchestrator | Thursday 09 April 2026 04:57:50 +0000 (0:00:01.784) 0:00:03.802 ********
2026-04-09 04:58:26.486459 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-09 04:58:26.486475 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-04-09 04:58:26.486492 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-09 04:58:26.486510 | orchestrator |
2026-04-09 04:58:26.486528 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-09 04:58:26.486544 | orchestrator |
2026-04-09 04:58:26.486559 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-09 04:58:26.486574 | orchestrator | Thursday 09 April 2026 04:57:52 +0000 (0:00:01.811) 0:00:05.613 ********
2026-04-09 04:58:26.486589 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:58:26.486607 | orchestrator |
2026-04-09 04:58:26.486625 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-09 04:58:26.486675 | orchestrator | Thursday 09 April 2026 04:57:54 +0000 (0:00:02.162) 0:00:07.776 ********
2026-04-09 04:58:26.486695 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:58:26.486708 | orchestrator |
2026-04-09 04:58:26.486720 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-09 04:58:26.486732 | orchestrator | Thursday 09 April 2026 04:57:57 +0000 (0:00:02.956) 0:00:10.732 ********
2026-04-09 04:58:26.486744 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:58:26.486755 | orchestrator |
2026-04-09 04:58:26.486767 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-09 04:58:26.486778 | orchestrator | Thursday 09 April 2026 04:58:00 +0000 (0:00:03.216) 0:00:13.949 ********
2026-04-09 04:58:26.486790 | orchestrator | changed: [testbed-node-0]
2026-04-09 04:58:26.486802 | orchestrator |
2026-04-09 04:58:26.486814 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-09 04:58:26.486825 | orchestrator | Thursday 09 April 2026 04:58:11 +0000 (0:00:10.258) 0:00:24.208 ********
2026-04-09 04:58:26.486837 | orchestrator | ok: [testbed-node-0] => {
2026-04-09 04:58:26.486848 | orchestrator |  "changed": false,
2026-04-09 04:58:26.486861 | orchestrator |  "msg": "All assertions passed"
2026-04-09 04:58:26.486873 | orchestrator | }
2026-04-09 04:58:26.486884 | orchestrator |
2026-04-09 04:58:26.486894 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-04-09 04:58:26.486904 | orchestrator | Thursday 09 April 2026 04:58:12 +0000 (0:00:01.372) 0:00:25.580 ********
2026-04-09 04:58:26.486914 | orchestrator | ok: [testbed-node-0] => {
2026-04-09 04:58:26.486924 | orchestrator |  "changed": false,
2026-04-09 04:58:26.486959 | orchestrator |  "msg": "All assertions passed"
2026-04-09 04:58:26.486970 | orchestrator | }
2026-04-09 04:58:26.486980 | orchestrator |
2026-04-09 04:58:26.486989 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-09 04:58:26.487038 | orchestrator | Thursday 09 April 2026 04:58:14 +0000 (0:00:01.662) 0:00:27.244 ********
2026-04-09 04:58:26.487064 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 04:58:26.487075 | orchestrator |
2026-04-09 04:58:26.487085 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-09 04:58:26.487095 | orchestrator | Thursday 09 April 2026 04:58:16 +0000 (0:00:02.266) 0:00:29.174 ********
2026-04-09 04:58:26.487104 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:58:26.487114 | orchestrator |
2026-04-09 04:58:26.487124 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-04-09 04:58:26.487134 | orchestrator | Thursday 09 April 2026 04:58:18 +0000 (0:00:02.891) 0:00:31.441 ********
2026-04-09 04:58:26.487143 | orchestrator | ok: [testbed-node-0]
2026-04-09 04:58:26.487153 | orchestrator |
2026-04-09 04:58:26.487162 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-04-09 04:58:26.487172 | orchestrator | Thursday 09 April 2026 04:58:21 +0000 (0:00:01.720) 0:00:34.332 ********
2026-04-09 04:58:26.487182 | orchestrator | skipping: [testbed-node-0]
2026-04-09 04:58:26.487191 | orchestrator |
2026-04-09 04:58:26.487201 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-04-09 04:58:26.487211 | orchestrator | Thursday 09 April 2026 04:58:23 +0000 (0:00:01.720) 0:00:36.053 ********
2026-04-09 04:58:26.487250 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:58:26.487280 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:58:26.487293 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:58:26.487304 | orchestrator |
2026-04-09 04:58:26.487314 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-04-09 04:58:26.487324 | orchestrator | Thursday 09 April 2026 04:58:25 +0000 (0:00:02.114) 0:00:38.168 ********
2026-04-09 04:58:26.487335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:58:26.487360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:58:46.559730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 04:58:46.559862 | orchestrator |
2026-04-09 04:58:46.559890 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-04-09 04:58:46.559911 | orchestrator | Thursday 09 April 2026 04:58:27 +0000 (0:00:02.513) 0:00:40.682 ********
2026-04-09 04:58:46.559929 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-09 04:58:46.559948 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-09 04:58:46.559967 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-09 04:58:46.560032 | orchestrator |
2026-04-09 04:58:46.560052 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-04-09 04:58:46.560070 | orchestrator | Thursday 09 April 2026 04:58:30 +0000 (0:00:02.405) 0:00:43.087 ********
2026-04-09 04:58:46.560090 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-09 04:58:46.560108 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-09 04:58:46.560127 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-09 04:58:46.560145 | orchestrator |
2026-04-09 04:58:46.560163 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-09 04:58:46.560182 | orchestrator | Thursday 09 April 2026 04:58:32 +0000 (0:00:02.724) 0:00:45.812 ********
2026-04-09 04:58:46.560200 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-09 04:58:46.560219 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-09 04:58:46.560237 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-09 04:58:46.560255 | orchestrator |
2026-04-09 04:58:46.560274 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-04-09 04:58:46.560324 | orchestrator | Thursday 09 April 2026 04:58:35 +0000 (0:00:02.280) 0:00:48.092 ********
2026-04-09 04:58:46.560344 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-09 04:58:46.560363 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-09 04:58:46.560383 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-09 04:58:46.560402 | orchestrator |
2026-04-09 04:58:46.560420 | orchestrator | TASK [rabbitmq : Copying over definitions.json]
******************************** 2026-04-09 04:58:46.560435 | orchestrator | Thursday 09 April 2026 04:58:37 +0000 (0:00:02.584) 0:00:50.676 ******** 2026-04-09 04:58:46.560446 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-09 04:58:46.560457 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-09 04:58:46.560468 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-09 04:58:46.560479 | orchestrator | 2026-04-09 04:58:46.560490 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-09 04:58:46.560501 | orchestrator | Thursday 09 April 2026 04:58:39 +0000 (0:00:02.274) 0:00:52.951 ******** 2026-04-09 04:58:46.560512 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-09 04:58:46.560523 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-09 04:58:46.560533 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-09 04:58:46.560590 | orchestrator | 2026-04-09 04:58:46.560630 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-09 04:58:46.560643 | orchestrator | Thursday 09 April 2026 04:58:42 +0000 (0:00:02.278) 0:00:55.229 ******** 2026-04-09 04:58:46.560654 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 04:58:46.560665 | orchestrator | 2026-04-09 04:58:46.560696 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-04-09 04:58:46.560708 | orchestrator | Thursday 09 April 2026 04:58:44 +0000 (0:00:01.835) 0:00:57.065 ******** 2026-04-09 04:58:46.560724 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 04:58:46.560747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 04:58:46.560781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 04:58:46.560801 | orchestrator | 2026-04-09 04:58:46.560822 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-09 04:58:46.560841 | orchestrator | Thursday 09 April 2026 04:58:46 +0000 (0:00:02.400) 0:00:59.465 ******** 2026-04-09 04:58:46.560886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 04:58:55.291964 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:58:55.292113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 04:58:55.292148 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:58:55.292155 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 04:58:55.292162 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:58:55.292168 | orchestrator | 2026-04-09 04:58:55.292176 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-09 04:58:55.292183 | orchestrator | Thursday 09 April 2026 04:58:47 +0000 (0:00:01.410) 0:01:00.876 ******** 2026-04-09 04:58:55.292201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 04:58:55.292208 | orchestrator | skipping: [testbed-node-0] 2026-04-09 04:58:55.292230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 04:58:55.292237 | orchestrator | skipping: [testbed-node-1] 2026-04-09 04:58:55.292244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 04:58:55.292255 | orchestrator | skipping: [testbed-node-2] 2026-04-09 04:58:55.292261 | orchestrator | 2026-04-09 04:58:55.292267 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-09 04:58:55.292273 | orchestrator | Thursday 09 April 2026 04:58:49 +0000 (0:00:01.886) 0:01:02.762 ******** 2026-04-09 04:58:55.292279 | orchestrator | ok: [testbed-node-2] 2026-04-09 04:58:55.292287 | orchestrator | ok: [testbed-node-0] 2026-04-09 04:58:55.292293 | orchestrator | ok: [testbed-node-1] 2026-04-09 04:58:55.292299 | orchestrator | 2026-04-09 04:58:55.292305 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-09 04:58:55.292312 | orchestrator | Thursday 09 April 2026 04:58:54 +0000 (0:00:04.518) 0:01:07.281 ******** 2026-04-09 04:58:55.292319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 04:58:55.292333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 05:00:39.487769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 05:00:39.487917 | orchestrator | 2026-04-09 05:00:39.487937 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-09 05:00:39.487951 | orchestrator | Thursday 09 April 2026 04:58:56 +0000 (0:00:02.223) 0:01:09.505 ******** 2026-04-09 05:00:39.487963 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 05:00:39.487975 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:00:39.487986 | orchestrator | } 2026-04-09 05:00:39.487997 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 05:00:39.488008 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:00:39.488019 | orchestrator | } 2026-04-09 05:00:39.488031 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 05:00:39.488042 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 
05:00:39.488053 | orchestrator | } 2026-04-09 05:00:39.488064 | orchestrator | 2026-04-09 05:00:39.488075 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 05:00:39.488169 | orchestrator | Thursday 09 April 2026 04:58:58 +0000 (0:00:01.631) 0:01:11.137 ******** 2026-04-09 05:00:39.488191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 05:00:39.488205 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:00:39.488233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 05:00:39.488255 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:00:39.488288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 05:00:39.488304 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:00:39.488317 | orchestrator | 2026-04-09 05:00:39.488330 | orchestrator | RUNNING HANDLER [rabbitmq 
: Restart rabbitmq container] ************************ 2026-04-09 05:00:39.488344 | orchestrator | Thursday 09 April 2026 04:59:00 +0000 (0:00:02.029) 0:01:13.166 ******** 2026-04-09 05:00:39.488357 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:00:39.488370 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:00:39.488383 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:00:39.488395 | orchestrator | 2026-04-09 05:00:39.488409 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-09 05:00:39.488423 | orchestrator | 2026-04-09 05:00:39.488436 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-09 05:00:39.488449 | orchestrator | Thursday 09 April 2026 04:59:01 +0000 (0:00:01.595) 0:01:14.762 ******** 2026-04-09 05:00:39.488463 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:00:39.488477 | orchestrator | 2026-04-09 05:00:39.488490 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-09 05:00:39.488502 | orchestrator | Thursday 09 April 2026 04:59:03 +0000 (0:00:02.132) 0:01:16.895 ******** 2026-04-09 05:00:39.488515 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:00:39.488528 | orchestrator | 2026-04-09 05:00:39.488541 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-09 05:00:39.488554 | orchestrator | Thursday 09 April 2026 04:59:13 +0000 (0:00:09.308) 0:01:26.203 ******** 2026-04-09 05:00:39.488567 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:00:39.488580 | orchestrator | 2026-04-09 05:00:39.488592 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-09 05:00:39.488605 | orchestrator | Thursday 09 April 2026 04:59:22 +0000 (0:00:09.221) 0:01:35.424 ******** 2026-04-09 05:00:39.488618 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:00:39.488631 | 
orchestrator | 2026-04-09 05:00:39.488643 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-09 05:00:39.488654 | orchestrator | 2026-04-09 05:00:39.488665 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-09 05:00:39.488676 | orchestrator | Thursday 09 April 2026 04:59:32 +0000 (0:00:10.067) 0:01:45.491 ******** 2026-04-09 05:00:39.488687 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:00:39.488698 | orchestrator | 2026-04-09 05:00:39.488709 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-09 05:00:39.488720 | orchestrator | Thursday 09 April 2026 04:59:34 +0000 (0:00:01.721) 0:01:47.213 ******** 2026-04-09 05:00:39.488731 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:00:39.488742 | orchestrator | 2026-04-09 05:00:39.488753 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-09 05:00:39.488771 | orchestrator | Thursday 09 April 2026 04:59:42 +0000 (0:00:08.540) 0:01:55.753 ******** 2026-04-09 05:00:39.488782 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:00:39.488793 | orchestrator | 2026-04-09 05:00:39.488804 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-09 05:00:39.488815 | orchestrator | Thursday 09 April 2026 04:59:56 +0000 (0:00:13.474) 0:02:09.228 ******** 2026-04-09 05:00:39.488826 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:00:39.488837 | orchestrator | 2026-04-09 05:00:39.488848 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-09 05:00:39.488859 | orchestrator | 2026-04-09 05:00:39.488874 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-09 05:00:39.488885 | orchestrator | Thursday 09 April 2026 05:00:05 +0000 (0:00:09.102) 
0:02:18.330 ******** 2026-04-09 05:00:39.488896 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:00:39.488907 | orchestrator | 2026-04-09 05:00:39.488919 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-09 05:00:39.488929 | orchestrator | Thursday 09 April 2026 05:00:06 +0000 (0:00:01.634) 0:02:19.964 ******** 2026-04-09 05:00:39.488940 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:00:39.488951 | orchestrator | 2026-04-09 05:00:39.488962 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-09 05:00:39.488973 | orchestrator | Thursday 09 April 2026 05:00:15 +0000 (0:00:08.536) 0:02:28.501 ******** 2026-04-09 05:00:39.488984 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:00:39.488995 | orchestrator | 2026-04-09 05:00:39.489006 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-09 05:00:39.489017 | orchestrator | Thursday 09 April 2026 05:00:29 +0000 (0:00:14.366) 0:02:42.867 ******** 2026-04-09 05:00:39.489027 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:00:39.489038 | orchestrator | 2026-04-09 05:00:39.489049 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-09 05:00:39.489060 | orchestrator | 2026-04-09 05:00:39.489071 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-09 05:00:39.489129 | orchestrator | Thursday 09 April 2026 05:00:39 +0000 (0:00:09.645) 0:02:52.513 ******** 2026-04-09 05:00:45.795993 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 05:00:45.796140 | orchestrator | 2026-04-09 05:00:45.796167 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-09 05:00:45.796187 | orchestrator | Thursday 09 April 2026 05:00:40 +0000 (0:00:01.516) 0:02:54.029 
********
2026-04-09 05:00:45.796205 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:00:45.796223 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:00:45.796240 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:00:45.796258 | orchestrator |
2026-04-09 05:00:45.796277 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 05:00:45.796299 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 05:00:45.796319 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 05:00:45.796335 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 05:00:45.796346 | orchestrator |
2026-04-09 05:00:45.796357 | orchestrator |
2026-04-09 05:00:45.796369 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 05:00:45.796380 | orchestrator | Thursday 09 April 2026 05:00:45 +0000 (0:00:04.385) 0:02:58.414 ********
2026-04-09 05:00:45.796391 | orchestrator | ===============================================================================
2026-04-09 05:00:45.796402 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.06s
2026-04-09 05:00:45.796414 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 28.81s
2026-04-09 05:00:45.796452 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 26.38s
2026-04-09 05:00:45.796465 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------ 10.26s
2026-04-09 05:00:45.796488 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.49s
2026-04-09 05:00:45.796516 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.52s
2026-04-09 05:00:45.796535 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.39s
2026-04-09 05:00:45.796553 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.22s
2026-04-09 05:00:45.796570 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.96s
2026-04-09 05:00:45.796587 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.89s
2026-04-09 05:00:45.796605 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.72s
2026-04-09 05:00:45.796623 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.58s
2026-04-09 05:00:45.796642 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.51s
2026-04-09 05:00:45.796659 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.41s
2026-04-09 05:00:45.796678 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.40s
2026-04-09 05:00:45.796696 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.28s
2026-04-09 05:00:45.796718 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.28s
2026-04-09 05:00:45.796737 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.28s
2026-04-09 05:00:45.796755 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.27s
2026-04-09 05:00:45.796769 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.22s
2026-04-09 05:00:45.998112 | orchestrator | + osism apply -a upgrade openvswitch
2026-04-09 05:00:47.368820 | orchestrator | 2026-04-09 05:00:47 | INFO  | Prepare task for execution of openvswitch.
2026-04-09 05:00:47.434804 | orchestrator | 2026-04-09 05:00:47 | INFO  | Task f47e0164-ff98-4417-b992-3f06f73d2393 (openvswitch) was prepared for execution.
2026-04-09 05:00:47.434916 | orchestrator | 2026-04-09 05:00:47 | INFO  | It takes a moment until task f47e0164-ff98-4417-b992-3f06f73d2393 (openvswitch) has been started and output is visible here.
2026-04-09 05:01:12.413028 | orchestrator |
2026-04-09 05:01:12.413189 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 05:01:12.413209 | orchestrator |
2026-04-09 05:01:12.413222 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 05:01:12.413234 | orchestrator | Thursday 09 April 2026 05:00:52 +0000 (0:00:01.676) 0:00:01.676 ********
2026-04-09 05:01:12.413245 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:01:12.413257 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:01:12.413268 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:01:12.413279 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:01:12.413290 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:01:12.413301 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:01:12.413313 | orchestrator |
2026-04-09 05:01:12.413325 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 05:01:12.413337 | orchestrator | Thursday 09 April 2026 05:00:54 +0000 (0:00:02.482) 0:00:04.159 ********
2026-04-09 05:01:12.413348 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 05:01:12.413360 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 05:01:12.413371 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 05:01:12.413383 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 05:01:12.413418 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 05:01:12.413430 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 05:01:12.413442 | orchestrator |
2026-04-09 05:01:12.413453 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-09 05:01:12.413464 | orchestrator |
2026-04-09 05:01:12.413475 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-09 05:01:12.413486 | orchestrator | Thursday 09 April 2026 05:00:57 +0000 (0:00:02.421) 0:00:06.580 ********
2026-04-09 05:01:12.413499 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 05:01:12.413512 | orchestrator |
2026-04-09 05:01:12.413523 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-09 05:01:12.413534 | orchestrator | Thursday 09 April 2026 05:01:00 +0000 (0:00:03.347) 0:00:09.928 ********
2026-04-09 05:01:12.413546 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-04-09 05:01:12.413557 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-04-09 05:01:12.413576 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-04-09 05:01:12.413595 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-04-09 05:01:12.413614 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-04-09 05:01:12.413634 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-04-09 05:01:12.413651 | orchestrator |
2026-04-09 05:01:12.413668 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-09 05:01:12.413686 | orchestrator | Thursday 09 April 2026 05:01:03 +0000 (0:00:03.077) 0:00:13.005 ********
2026-04-09 05:01:12.413703 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-04-09 05:01:12.413721 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-04-09 05:01:12.413739 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-04-09 05:01:12.413757 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-04-09 05:01:12.413774 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-04-09 05:01:12.413791 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-04-09 05:01:12.413807 | orchestrator |
2026-04-09 05:01:12.413824 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-09 05:01:12.413843 | orchestrator | Thursday 09 April 2026 05:01:06 +0000 (0:00:03.013) 0:00:16.018 ********
2026-04-09 05:01:12.413862 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-09 05:01:12.413882 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:01:12.413902 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-09 05:01:12.413920 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:01:12.413935 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-09 05:01:12.413952 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:01:12.413970 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-09 05:01:12.413988 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:01:12.414005 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-09 05:01:12.414182 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:01:12.414200 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-09 05:01:12.414211 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:01:12.414223 | orchestrator |
2026-04-09 05:01:12.414234 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-09 05:01:12.414246 | orchestrator | Thursday 09 April 2026 05:01:09 +0000 (0:00:02.340) 0:00:18.358 ********
2026-04-09 05:01:12.414257 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:01:12.414268 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:01:12.414282 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:01:12.414300 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:01:12.414318 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:01:12.414337 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:01:12.414374 | orchestrator |
2026-04-09 05:01:12.414386 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-04-09 05:01:12.414397 | orchestrator | Thursday 09 April 2026 05:01:11 +0000 (0:00:02.315) 0:00:20.674 ********
2026-04-09 05:01:12.414451 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:12.414469 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:12.414488 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:12.414507 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:12.414526 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:12.414551 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:12.414596 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:15.911314 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:15.911437 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:15.911466 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:15.911487 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:15.911541 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:15.911555 | orchestrator |
2026-04-09 05:01:15.911569 | orchestrator | TASK [openvswitch :
Copying over config.json files for services] ***************
2026-04-09 05:01:15.911581 | orchestrator | Thursday 09 April 2026 05:01:13 +0000 (0:00:02.530) 0:00:23.205 ********
2026-04-09 05:01:15.911612 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:15.911626 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:15.911638 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:15.911649 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:15.911673 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:15.911686 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:15.911707 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:21.572903 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:21.573025 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:21.573059 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:21.573215 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:21.573245 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:21.573266 | orchestrator |
2026-04-09 05:01:21.573287 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-09 05:01:21.573307 | orchestrator | Thursday 09 April 2026 05:01:17 +0000 (0:00:03.677) 0:00:26.883 ********
2026-04-09 05:01:21.573319 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:01:21.573331 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:01:21.573343 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:01:21.573354 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:01:21.573365 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:01:21.573375 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:01:21.573386 | orchestrator |
2026-04-09 05:01:21.573398 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-04-09 05:01:21.573441 | orchestrator | Thursday 09 April 2026 05:01:19 +0000 (0:00:02.388) 0:00:29.271 ********
2026-04-09 05:01:21.573456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:21.573472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:21.573498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:21.573518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:21.573533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:21.573555 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 05:01:25.912633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:25.912783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:25.912802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:25.912830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:25.912842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:25.912874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:01:25.912887 | orchestrator |
2026-04-09 05:01:25.912900 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-04-09 05:01:25.912913 | orchestrator | Thursday 09 April 2026 05:01:23 +0000 (0:00:03.521) 0:00:32.792 ********
2026-04-09 05:01:25.912933 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 05:01:25.912945 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 05:01:25.912957 | orchestrator | }
2026-04-09 05:01:25.912969 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 05:01:25.912980
| orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:01:25.912991 | orchestrator | } 2026-04-09 05:01:25.913002 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 05:01:25.913013 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:01:25.913025 | orchestrator | } 2026-04-09 05:01:25.913036 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 05:01:25.913047 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:01:25.913058 | orchestrator | } 2026-04-09 05:01:25.913069 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 05:01:25.913080 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:01:25.913091 | orchestrator | } 2026-04-09 05:01:25.913140 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 05:01:25.913153 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:01:25.913164 | orchestrator | } 2026-04-09 05:01:25.913175 | orchestrator | 2026-04-09 05:01:25.913187 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 05:01:25.913198 | orchestrator | Thursday 09 April 2026 05:01:25 +0000 (0:00:01.895) 0:00:34.688 ******** 2026-04-09 05:01:25.913210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 05:01:25.913230 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 05:01:25.913243 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:01:25.913255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 05:01:25.913267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 05:01:25.913295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 05:02:00.532946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 05:02:00.533169 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:02:00.533203 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:02:00.533216 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 05:02:00.533250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 05:02:00.533264 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:02:00.533275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 05:02:00.533312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 05:02:00.533325 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:02:00.533358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 05:02:00.533371 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 05:02:00.533383 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:02:00.533394 | orchestrator |
2026-04-09 05:02:00.533407 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 05:02:00.533420 | orchestrator | Thursday 09 April 2026 05:01:28 +0000 (0:00:02.677) 0:00:37.365 ********
2026-04-09 05:02:00.533432 | orchestrator |
2026-04-09 05:02:00.533446 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 05:02:00.533460 | orchestrator | Thursday 09 April 2026 05:01:28 +0000 (0:00:00.692) 0:00:38.058 ********
2026-04-09 05:02:00.533473 | orchestrator |
2026-04-09 05:02:00.533486 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 05:02:00.533504 | orchestrator | Thursday 09 April 2026 05:01:29 +0000 (0:00:00.582) 0:00:38.640 ********
2026-04-09 05:02:00.533517 | orchestrator |
2026-04-09 05:02:00.533532 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 05:02:00.533545 | orchestrator | Thursday 09 April 2026 05:01:29 +0000 (0:00:00.520) 0:00:39.161 ********
2026-04-09 05:02:00.533556 | orchestrator |
2026-04-09 05:02:00.533567 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 05:02:00.533578 | orchestrator | Thursday 09 April 2026 05:01:30 +0000 (0:00:00.541) 0:00:39.703 ********
2026-04-09 05:02:00.533589 | orchestrator |
2026-04-09 05:02:00.533601 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 05:02:00.533622 | orchestrator | Thursday 09 April 2026 05:01:30 +0000 (0:00:00.519) 0:00:40.223 ********
2026-04-09 05:02:00.533633 | orchestrator |
2026-04-09 05:02:00.533645 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-09 05:02:00.533656 | orchestrator | Thursday 09 April 2026 05:01:31 +0000 (0:00:00.912) 0:00:41.136 ********
2026-04-09 05:02:00.533667 | orchestrator | changed: [testbed-node-4]
2026-04-09 05:02:00.533678 | orchestrator | changed: [testbed-node-3]
2026-04-09 05:02:00.533690 | orchestrator | changed: [testbed-node-5]
2026-04-09 05:02:00.533701 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:02:00.533712 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:02:00.533723 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:02:00.533734 | orchestrator |
2026-04-09 05:02:00.533746 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-09 05:02:00.533758 | orchestrator | Thursday 09 April 2026 05:01:44 +0000 (0:00:12.232) 0:00:53.369 ********
2026-04-09 05:02:00.533769 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:02:00.533781 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:02:00.533792 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:02:00.533803 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:02:00.533814 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:02:00.533825 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:02:00.533836 | orchestrator |
2026-04-09 05:02:00.533848 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-09 05:02:00.533859 | orchestrator | Thursday 09 April 2026 05:01:46 +0000 (0:00:02.405) 0:00:55.774 ********
2026-04-09 05:02:00.533870 | orchestrator | changed: [testbed-node-3]
2026-04-09 05:02:00.533881 | orchestrator | changed: [testbed-node-5]
2026-04-09 05:02:00.533892 | orchestrator | changed: [testbed-node-4]
2026-04-09 05:02:00.533904 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:02:00.533915 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:02:00.533926 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:02:00.533937 | orchestrator |
2026-04-09 05:02:00.533948 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-09 05:02:00.533959 | orchestrator | Thursday 09 April 2026 05:01:57 +0000 (0:00:11.276) 0:01:07.051 ********
2026-04-09 05:02:00.533970 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-09 05:02:00.533982 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-09 05:02:00.533994 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-09 05:02:00.534005 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-09 05:02:00.534085 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-09 05:02:00.534139 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-04-09 05:02:13.711676 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-09 05:02:13.711839 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-09 05:02:13.711855 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-09 05:02:13.711868 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-09 05:02:13.711879 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-09 05:02:13.711891 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-09 05:02:13.711903 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 05:02:13.711943 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 05:02:13.711956 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 05:02:13.711966 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 05:02:13.711977 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 05:02:13.711988 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 05:02:13.712000 | orchestrator |
2026-04-09 05:02:13.712013 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-09 05:02:13.712026 | orchestrator | Thursday 09 April 2026 05:02:05 +0000 (0:00:07.772) 0:01:14.824 ********
2026-04-09 05:02:13.712055 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-09 05:02:13.712068 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:02:13.712080 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-09 05:02:13.712090 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:02:13.712101 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-09 05:02:13.712112 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:02:13.712145 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-04-09 05:02:13.712157 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-04-09 05:02:13.712168 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-04-09 05:02:13.712180 | orchestrator |
2026-04-09 05:02:13.712194 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-09 05:02:13.712207 | orchestrator | Thursday 09 April 2026 05:02:09 +0000 (0:00:03.519) 0:01:18.343 ********
2026-04-09 05:02:13.712220 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-09 05:02:13.712232 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:02:13.712246 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-09 05:02:13.712258 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:02:13.712271 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-09 05:02:13.712284 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:02:13.712297 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-09 05:02:13.712310 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-09 05:02:13.712323 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-09 05:02:13.712337 | orchestrator |
2026-04-09 05:02:13.712349 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 05:02:13.712364 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 05:02:13.712379 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 05:02:13.712392 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 05:02:13.712405 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 05:02:13.712419 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 05:02:13.712431 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 05:02:13.712444 | orchestrator |
2026-04-09 05:02:13.712458 | orchestrator |
2026-04-09 05:02:13.712479 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 05:02:13.712492 | orchestrator | Thursday 09 April 2026 05:02:13 +0000 (0:00:04.240) 0:01:22.584 ********
2026-04-09 05:02:13.712505 | orchestrator | ===============================================================================
2026-04-09 05:02:13.712519 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.23s
2026-04-09 05:02:13.712553 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.28s
2026-04-09 05:02:13.712565 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.77s
2026-04-09 05:02:13.712576 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.24s
2026-04-09 05:02:13.712587 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.77s
2026-04-09 05:02:13.712598 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.68s
2026-04-09 05:02:13.712609 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.52s
2026-04-09 05:02:13.712620 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.52s
2026-04-09 05:02:13.712631 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.35s
2026-04-09 05:02:13.712642 | orchestrator | module-load : Load modules ---------------------------------------------- 3.08s
2026-04-09 05:02:13.712652 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.01s
2026-04-09 05:02:13.712663 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.68s
2026-04-09 05:02:13.712674 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.53s
2026-04-09 05:02:13.712685 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.48s
2026-04-09 05:02:13.712696 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.42s
2026-04-09 05:02:13.712707 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.41s
2026-04-09 05:02:13.712718 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.39s
2026-04-09 05:02:13.712729 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.34s
2026-04-09 05:02:13.712739 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.32s
2026-04-09 05:02:13.712750 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.90s
2026-04-09 05:02:13.926858 | orchestrator | + osism apply -a upgrade ovn
2026-04-09 05:02:15.281471 | orchestrator | 2026-04-09 05:02:15 | INFO  | Prepare task for execution of ovn.
2026-04-09 05:02:15.351039 | orchestrator | 2026-04-09 05:02:15 | INFO  | Task 7aad4d77-76e5-4bd8-abfb-0b08ba3da0dd (ovn) was prepared for execution.
2026-04-09 05:02:15.351224 | orchestrator | 2026-04-09 05:02:15 | INFO  | It takes a moment until task 7aad4d77-76e5-4bd8-abfb-0b08ba3da0dd (ovn) has been started and output is visible here.
2026-04-09 05:02:36.862557 | orchestrator |
2026-04-09 05:02:36.862682 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 05:02:36.862699 | orchestrator |
2026-04-09 05:02:36.862711 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 05:02:36.862723 | orchestrator | Thursday 09 April 2026 05:02:20 +0000 (0:00:01.479) 0:00:01.479 ********
2026-04-09 05:02:36.862734 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:02:36.862746 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:02:36.862758 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:02:36.862768 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:02:36.862780 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:02:36.862791 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:02:36.862802 | orchestrator |
2026-04-09 05:02:36.862813 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 05:02:36.862824 | orchestrator | Thursday 09 April 2026 05:02:23 +0000 (0:00:03.404) 0:00:04.884 ********
2026-04-09 05:02:36.862835 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-09 05:02:36.862870 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-09 05:02:36.862882 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-09 05:02:36.862893 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-09 05:02:36.862904 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-09 05:02:36.862914 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-09 05:02:36.862925 | orchestrator |
2026-04-09 05:02:36.862936 | orchestrator | PLAY [Apply role ovn-controller]
***********************************************
2026-04-09 05:02:36.862947 | orchestrator |
2026-04-09 05:02:36.862958 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-09 05:02:36.862969 | orchestrator | Thursday 09 April 2026 05:02:26 +0000 (0:00:02.728) 0:00:07.612 ********
2026-04-09 05:02:36.862981 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 05:02:36.862994 | orchestrator |
2026-04-09 05:02:36.863005 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-09 05:02:36.863016 | orchestrator | Thursday 09 April 2026 05:02:30 +0000 (0:00:04.170) 0:00:11.783 ********
2026-04-09 05:02:36.863029 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863043 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863054 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863066 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863077 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863121 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863145 | orchestrator |
2026-04-09 05:02:36.863183 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-09 05:02:36.863197 | orchestrator | Thursday 09 April 2026 05:02:33 +0000 (0:00:02.828) 0:00:14.611 ********
2026-04-09 05:02:36.863211 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863225 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863238 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863251 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863264 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863278 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863290 | orchestrator |
2026-04-09 05:02:36.863303 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-09 05:02:36.863317 | orchestrator | Thursday 09 April 2026 05:02:36 +0000 (0:00:02.976) 0:00:17.588 ********
2026-04-09 05:02:36.863331 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 05:02:36.863349 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes':
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:36.863378 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458437 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458570 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458595 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458607 | orchestrator | 2026-04-09 05:02:46.458619 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-09 05:02:46.458631 | orchestrator | Thursday 09 April 2026 05:02:38 +0000 (0:00:02.174) 0:00:19.763 ******** 2026-04-09 05:02:46.458641 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458653 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458663 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458673 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458721 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458750 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458761 | orchestrator | 2026-04-09 05:02:46.458772 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-09 05:02:46.458781 | orchestrator | Thursday 09 April 2026 05:02:41 +0000 (0:00:03.283) 0:00:23.046 ******** 2026-04-09 05:02:46.458792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:02:46.458863 | orchestrator | 2026-04-09 05:02:46.458873 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-09 05:02:46.458884 | orchestrator | Thursday 09 April 2026 05:02:44 +0000 (0:00:02.863) 0:00:25.909 ******** 2026-04-09 05:02:46.458894 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 05:02:46.458905 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:02:46.458915 | orchestrator | } 2026-04-09 05:02:46.458925 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 05:02:46.458935 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:02:46.458945 | orchestrator | } 2026-04-09 05:02:46.458957 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 05:02:46.458969 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:02:46.458980 | orchestrator | } 2026-04-09 05:02:46.458996 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 05:02:46.459009 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:02:46.459020 | orchestrator | } 2026-04-09 05:02:46.459032 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 05:02:46.459043 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:02:46.459055 | orchestrator | } 2026-04-09 05:02:46.459067 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 05:02:46.459078 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:02:46.459090 | orchestrator | } 
2026-04-09 05:02:46.459101 | orchestrator | 2026-04-09 05:02:46.459112 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 05:02:46.459123 | orchestrator | Thursday 09 April 2026 05:02:46 +0000 (0:00:01.825) 0:00:27.734 ******** 2026-04-09 05:02:46.459143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:03:15.740437 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:03:15.740571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:03:15.740595 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:03:15.740608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:03:15.740620 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:03:15.740632 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:03:15.740644 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:03:15.740655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:03:15.740692 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:03:15.740704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:03:15.740715 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:03:15.740727 | orchestrator | 2026-04-09 05:03:15.740739 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-09 05:03:15.740752 | orchestrator | Thursday 09 April 2026 05:02:48 +0000 (0:00:02.443) 0:00:30.178 ******** 2026-04-09 05:03:15.740763 | orchestrator | ok: [testbed-node-1] 2026-04-09 
05:03:15.740775 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:03:15.740786 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:03:15.740797 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:03:15.740807 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:03:15.740818 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:03:15.740829 | orchestrator | 2026-04-09 05:03:15.740841 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-09 05:03:15.740851 | orchestrator | Thursday 09 April 2026 05:02:53 +0000 (0:00:04.979) 0:00:35.157 ******** 2026-04-09 05:03:15.740863 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-09 05:03:15.740889 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-09 05:03:15.740900 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-09 05:03:15.740911 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-09 05:03:15.740922 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-09 05:03:15.740933 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 05:03:15.740947 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 05:03:15.740960 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 05:03:15.740972 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 05:03:15.740985 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 05:03:15.740998 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 05:03:15.741030 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 05:03:15.741043 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 05:03:15.741057 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 05:03:15.741070 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 05:03:15.741083 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 05:03:15.741104 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 05:03:15.741117 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 05:03:15.741130 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 05:03:15.741143 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 05:03:15.741156 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 05:03:15.741170 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 05:03:15.741184 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 05:03:15.741224 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 
2026-04-09 05:03:15.741237 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 05:03:15.741250 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 05:03:15.741263 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 05:03:15.741276 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 05:03:15.741289 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 05:03:15.741302 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 05:03:15.741315 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-09 05:03:15.741326 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-09 05:03:15.741337 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-09 05:03:15.741348 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-09 05:03:15.741359 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-09 05:03:15.741370 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-09 05:03:15.741383 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-09 05:03:15.741394 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 
'absent'}) 2026-04-09 05:03:15.741405 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-09 05:03:15.741421 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-09 05:03:15.741432 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-09 05:03:15.741444 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-09 05:03:15.741455 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-09 05:03:15.741466 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-09 05:03:15.741477 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-09 05:03:15.741495 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-09 05:03:15.741514 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 05:08:26.583225 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 05:08:26.583338 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 05:08:26.583354 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 05:08:26.583366 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 
05:08:26.583378 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-09 05:08:26.583390 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-09 05:08:26.583403 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-09 05:08:26.583415 | orchestrator | 2026-04-09 05:08:26.583427 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 05:08:26.583438 | orchestrator | Thursday 09 April 2026 05:03:29 +0000 (0:00:35.610) 0:01:10.768 ******** 2026-04-09 05:08:26.583450 | orchestrator | 2026-04-09 05:08:26.583461 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 05:08:26.583473 | orchestrator | Thursday 09 April 2026 05:03:29 +0000 (0:00:00.431) 0:01:11.200 ******** 2026-04-09 05:08:26.583484 | orchestrator | 2026-04-09 05:08:26.583495 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 05:08:26.583506 | orchestrator | Thursday 09 April 2026 05:03:30 +0000 (0:00:00.466) 0:01:11.666 ******** 2026-04-09 05:08:26.583517 | orchestrator | 2026-04-09 05:08:26.583528 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 05:08:26.583539 | orchestrator | Thursday 09 April 2026 05:03:30 +0000 (0:00:00.652) 0:01:12.319 ******** 2026-04-09 05:08:26.583550 | orchestrator | 2026-04-09 05:08:26.583561 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 05:08:26.583573 | orchestrator | Thursday 09 April 2026 05:03:31 +0000 (0:00:00.442) 0:01:12.761 ******** 2026-04-09 05:08:26.583584 | orchestrator | 2026-04-09 05:08:26.583669 | orchestrator | TASK [ovn-controller : Flush 
handlers] ***************************************** 2026-04-09 05:08:26.583684 | orchestrator | Thursday 09 April 2026 05:03:31 +0000 (0:00:00.474) 0:01:13.236 ******** 2026-04-09 05:08:26.583695 | orchestrator | 2026-04-09 05:08:26.583707 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-09 05:08:26.583719 | orchestrator | Thursday 09 April 2026 05:03:32 +0000 (0:00:00.829) 0:01:14.065 ******** 2026-04-09 05:08:26.583730 | orchestrator | changed: [testbed-node-3] 2026-04-09 05:08:26.583742 | orchestrator | changed: [testbed-node-4] 2026-04-09 05:08:26.583754 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:08:26.583767 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:08:26.583781 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:08:26.583794 | orchestrator | 2026-04-09 05:08:26.583809 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-04-09 05:08:26.583822 | orchestrator | changed: [testbed-node-5] 2026-04-09 05:08:26.583836 | orchestrator | 2026-04-09 05:08:26.583850 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-09 05:08:26.583863 | orchestrator | 2026-04-09 05:08:26.583876 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-09 05:08:26.583889 | orchestrator | Thursday 09 April 2026 05:07:52 +0000 (0:04:19.808) 0:05:33.874 ******** 2026-04-09 05:08:26.583903 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 05:08:26.583940 | orchestrator | 2026-04-09 05:08:26.583955 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-09 05:08:26.583969 | orchestrator | Thursday 09 April 2026 05:07:54 +0000 (0:00:01.698) 0:05:35.572 ******** 2026-04-09 05:08:26.583983 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 05:08:26.583997 | orchestrator | 2026-04-09 05:08:26.584011 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-09 05:08:26.584024 | orchestrator | Thursday 09 April 2026 05:07:56 +0000 (0:00:01.890) 0:05:37.463 ******** 2026-04-09 05:08:26.584037 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584051 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584065 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.584077 | orchestrator | 2026-04-09 05:08:26.584091 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-09 05:08:26.584105 | orchestrator | Thursday 09 April 2026 05:07:57 +0000 (0:00:01.806) 0:05:39.269 ******** 2026-04-09 05:08:26.584118 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584129 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584140 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.584151 | orchestrator | 2026-04-09 05:08:26.584163 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-09 05:08:26.584175 | orchestrator | Thursday 09 April 2026 05:07:59 +0000 (0:00:01.366) 0:05:40.635 ******** 2026-04-09 05:08:26.584186 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584197 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584208 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.584219 | orchestrator | 2026-04-09 05:08:26.584231 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-09 05:08:26.584242 | orchestrator | Thursday 09 April 2026 05:08:00 +0000 (0:00:01.533) 0:05:42.169 ******** 2026-04-09 05:08:26.584253 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584264 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584275 | orchestrator 
| ok: [testbed-node-2] 2026-04-09 05:08:26.584286 | orchestrator | 2026-04-09 05:08:26.584297 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-09 05:08:26.584309 | orchestrator | Thursday 09 April 2026 05:08:02 +0000 (0:00:01.384) 0:05:43.553 ******** 2026-04-09 05:08:26.584336 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584349 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584360 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.584371 | orchestrator | 2026-04-09 05:08:26.584382 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-09 05:08:26.584394 | orchestrator | Thursday 09 April 2026 05:08:03 +0000 (0:00:01.510) 0:05:45.064 ******** 2026-04-09 05:08:26.584405 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:08:26.584416 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:08:26.584427 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:08:26.584438 | orchestrator | 2026-04-09 05:08:26.584450 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-09 05:08:26.584461 | orchestrator | Thursday 09 April 2026 05:08:05 +0000 (0:00:01.519) 0:05:46.583 ******** 2026-04-09 05:08:26.584472 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584483 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.584495 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584506 | orchestrator | 2026-04-09 05:08:26.584517 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-09 05:08:26.584528 | orchestrator | Thursday 09 April 2026 05:08:07 +0000 (0:00:01.870) 0:05:48.453 ******** 2026-04-09 05:08:26.584539 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584551 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584562 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.584573 | 
orchestrator | 2026-04-09 05:08:26.584651 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-09 05:08:26.584675 | orchestrator | Thursday 09 April 2026 05:08:08 +0000 (0:00:01.429) 0:05:49.883 ******** 2026-04-09 05:08:26.584703 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584714 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584725 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.584736 | orchestrator | 2026-04-09 05:08:26.584747 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-09 05:08:26.584758 | orchestrator | Thursday 09 April 2026 05:08:10 +0000 (0:00:01.864) 0:05:51.747 ******** 2026-04-09 05:08:26.584769 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584780 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584791 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.584802 | orchestrator | 2026-04-09 05:08:26.584813 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-09 05:08:26.584824 | orchestrator | Thursday 09 April 2026 05:08:11 +0000 (0:00:01.619) 0:05:53.367 ******** 2026-04-09 05:08:26.584835 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:08:26.584846 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:08:26.584856 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:08:26.584867 | orchestrator | 2026-04-09 05:08:26.584878 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-09 05:08:26.584889 | orchestrator | Thursday 09 April 2026 05:08:13 +0000 (0:00:01.360) 0:05:54.728 ******** 2026-04-09 05:08:26.584900 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:08:26.584911 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:08:26.584921 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:08:26.584932 | orchestrator | 2026-04-09 
05:08:26.584943 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-09 05:08:26.584954 | orchestrator | Thursday 09 April 2026 05:08:14 +0000 (0:00:01.336) 0:05:56.065 ******** 2026-04-09 05:08:26.584965 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.584975 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.584986 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.584997 | orchestrator | 2026-04-09 05:08:26.585008 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-09 05:08:26.585019 | orchestrator | Thursday 09 April 2026 05:08:16 +0000 (0:00:01.975) 0:05:58.041 ******** 2026-04-09 05:08:26.585030 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.585041 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.585052 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.585062 | orchestrator | 2026-04-09 05:08:26.585073 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-09 05:08:26.585084 | orchestrator | Thursday 09 April 2026 05:08:18 +0000 (0:00:01.396) 0:05:59.437 ******** 2026-04-09 05:08:26.585095 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.585106 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.585116 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.585127 | orchestrator | 2026-04-09 05:08:26.585138 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-09 05:08:26.585149 | orchestrator | Thursday 09 April 2026 05:08:19 +0000 (0:00:01.920) 0:06:01.357 ******** 2026-04-09 05:08:26.585160 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:08:26.585171 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:08:26.585181 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:08:26.585192 | orchestrator | 2026-04-09 05:08:26.585204 | orchestrator | TASK [ovn-db : Fail on existing OVN SB 
cluster with no leader] ***************** 2026-04-09 05:08:26.585214 | orchestrator | Thursday 09 April 2026 05:08:21 +0000 (0:00:01.417) 0:06:02.774 ******** 2026-04-09 05:08:26.585230 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:08:26.585242 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:08:26.585252 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:08:26.585263 | orchestrator | 2026-04-09 05:08:26.585274 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-09 05:08:26.585285 | orchestrator | Thursday 09 April 2026 05:08:23 +0000 (0:00:01.645) 0:06:04.420 ******** 2026-04-09 05:08:26.585296 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:08:26.585307 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:08:26.585326 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:08:26.585338 | orchestrator | 2026-04-09 05:08:26.585348 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-09 05:08:26.585359 | orchestrator | Thursday 09 April 2026 05:08:24 +0000 (0:00:01.758) 0:06:06.179 ******** 2026-04-09 05:08:26.585382 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832019 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832132 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832149 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832163 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832175 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832203 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:32.832268 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:32.832293 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:32.832318 | orchestrator | 
2026-04-09 05:08:32.832336 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-09 05:08:32.832357 | orchestrator | Thursday 09 April 2026 05:08:28 +0000 (0:00:03.875) 0:06:10.054 ******** 2026-04-09 05:08:32.832377 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832397 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832434 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832452 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:32.832481 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.483429 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.483542 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.483559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:48.483573 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.483674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:48.483690 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.483701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:48.483713 | orchestrator | 2026-04-09 05:08:48.483727 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-04-09 05:08:48.483740 | orchestrator | Thursday 09 April 2026 05:08:34 +0000 (0:00:06.236) 0:06:16.291 ******** 2026-04-09 05:08:48.483752 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-04-09 05:08:48.483764 | orchestrator | 2026-04-09 05:08:48.483775 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-04-09 05:08:48.483786 | orchestrator | Thursday 09 April 2026 05:08:37 +0000 (0:00:02.323) 0:06:18.615 ******** 2026-04-09 05:08:48.483798 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:08:48.483810 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:08:48.483836 | orchestrator | changed: [testbed-node-2] 2026-04-09 
05:08:48.483848 | orchestrator | 2026-04-09 05:08:48.483860 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-04-09 05:08:48.483872 | orchestrator | Thursday 09 April 2026 05:08:38 +0000 (0:00:01.715) 0:06:20.331 ******** 2026-04-09 05:08:48.483883 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:08:48.483901 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:08:48.483919 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:08:48.483939 | orchestrator | 2026-04-09 05:08:48.483969 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-04-09 05:08:48.483989 | orchestrator | Thursday 09 April 2026 05:08:41 +0000 (0:00:02.870) 0:06:23.201 ******** 2026-04-09 05:08:48.484008 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:08:48.484027 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:08:48.484045 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:08:48.484064 | orchestrator | 2026-04-09 05:08:48.484083 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-04-09 05:08:48.484104 | orchestrator | Thursday 09 April 2026 05:08:44 +0000 (0:00:02.677) 0:06:25.879 ******** 2026-04-09 05:08:48.484125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.484164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.484187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.484219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.484241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.484257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:48.484289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:53.494504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495415 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:53.495457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:08:53.495502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495516 | orchestrator | 2026-04-09 05:08:53.495532 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-09 05:08:53.495546 | orchestrator | Thursday 09 April 2026 05:08:49 +0000 (0:00:05.248) 0:06:31.128 ******** 2026-04-09 05:08:53.495559 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 05:08:53.495571 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:08:53.495582 | orchestrator | } 2026-04-09 05:08:53.495595 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 05:08:53.495606 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:08:53.495660 | orchestrator | } 2026-04-09 05:08:53.495680 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 05:08:53.495699 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 05:08:53.495712 | orchestrator | } 2026-04-09 05:08:53.495724 | orchestrator | 2026-04-09 05:08:53.495735 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 05:08:53.495747 | orchestrator | Thursday 09 April 2026 05:08:51 +0000 (0:00:01.436) 0:06:32.565 ******** 2026-04-09 05:08:53.495759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495795 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-09 05:08:53.495897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 05:08:53.495925 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 05:10:50.025132 | orchestrator | 2026-04-09 05:10:50.025226 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-09 05:10:50.025236 | orchestrator | Thursday 09 April 2026 05:08:54 +0000 (0:00:03.462) 0:06:36.028 ******** 2026-04-09 05:10:50.025244 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-09 05:10:50.025251 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-09 05:10:50.025258 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-09 05:10:50.025264 | orchestrator | 2026-04-09 05:10:50.025270 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-09 05:10:50.025277 | orchestrator | Thursday 09 April 2026 05:09:18 +0000 (0:00:24.243) 
0:07:00.271 ********
2026-04-09 05:10:50.025284 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 05:10:50.025290 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 05:10:50.025296 | orchestrator | }
2026-04-09 05:10:50.025303 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 05:10:50.025309 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 05:10:50.025315 | orchestrator | }
2026-04-09 05:10:50.025321 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 05:10:50.025327 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 05:10:50.025333 | orchestrator | }
2026-04-09 05:10:50.025339 | orchestrator |
2026-04-09 05:10:50.025345 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 05:10:50.025351 | orchestrator | Thursday 09 April 2026 05:09:20 +0000 (0:00:01.529) 0:07:01.801 ********
2026-04-09 05:10:50.025357 | orchestrator |
2026-04-09 05:10:50.025363 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 05:10:50.025369 | orchestrator | Thursday 09 April 2026 05:09:20 +0000 (0:00:00.436) 0:07:02.237 ********
2026-04-09 05:10:50.025374 | orchestrator |
2026-04-09 05:10:50.025380 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 05:10:50.025386 | orchestrator | Thursday 09 April 2026 05:09:21 +0000 (0:00:00.477) 0:07:02.715 ********
2026-04-09 05:10:50.025392 | orchestrator |
2026-04-09 05:10:50.025398 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-09 05:10:50.025404 | orchestrator | Thursday 09 April 2026 05:09:22 +0000 (0:00:00.812) 0:07:03.528 ********
2026-04-09 05:10:50.025410 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:10:50.025416 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:10:50.025434 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:10:50.025440 | orchestrator |
2026-04-09 05:10:50.025447 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-09 05:10:50.025452 | orchestrator | Thursday 09 April 2026 05:09:39 +0000 (0:00:16.996) 0:07:20.525 ********
2026-04-09 05:10:50.025458 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:10:50.025464 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:10:50.025470 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:10:50.025476 | orchestrator |
2026-04-09 05:10:50.025482 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-04-09 05:10:50.025488 | orchestrator | Thursday 09 April 2026 05:09:56 +0000 (0:00:16.907) 0:07:37.432 ********
2026-04-09 05:10:50.025494 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-09 05:10:50.025517 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-09 05:10:50.025523 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-04-09 05:10:50.025529 | orchestrator |
2026-04-09 05:10:50.025535 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-09 05:10:50.025542 | orchestrator | Thursday 09 April 2026 05:10:12 +0000 (0:00:16.279) 0:07:53.711 ********
2026-04-09 05:10:50.025547 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:10:50.025553 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:10:50.025559 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:10:50.025565 | orchestrator |
2026-04-09 05:10:50.025571 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-09 05:10:50.025577 | orchestrator | Thursday 09 April 2026 05:10:29 +0000 (0:00:17.131) 0:08:10.843 ********
2026-04-09 05:10:50.025583 | orchestrator | Pausing for 5 seconds
2026-04-09 05:10:50.025589 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:10:50.025595 | orchestrator |
2026-04-09 05:10:50.025601 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-09 05:10:50.025607 | orchestrator | Thursday 09 April 2026 05:10:35 +0000 (0:00:06.164) 0:08:17.008 ********
2026-04-09 05:10:50.025612 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:10:50.025618 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:10:50.025624 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:10:50.025630 | orchestrator |
2026-04-09 05:10:50.025636 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-09 05:10:50.025642 | orchestrator | Thursday 09 April 2026 05:10:37 +0000 (0:00:01.874) 0:08:18.883 ********
2026-04-09 05:10:50.025648 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:10:50.025654 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:10:50.025660 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:10:50.025666 | orchestrator |
2026-04-09 05:10:50.025672 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-09 05:10:50.025678 | orchestrator | Thursday 09 April 2026 05:10:39 +0000 (0:00:01.786) 0:08:20.669 ********
2026-04-09 05:10:50.025684 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:10:50.025690 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:10:50.025695 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:10:50.025701 | orchestrator |
2026-04-09 05:10:50.025707 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-09 05:10:50.025713 | orchestrator | Thursday 09 April 2026 05:10:41 +0000 (0:00:01.853) 0:08:22.522 ********
2026-04-09 05:10:50.025738 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:10:50.025744 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:10:50.025749 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:10:50.025755 | orchestrator |
2026-04-09 05:10:50.025761 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-09 05:10:50.025767 | orchestrator | Thursday 09 April 2026 05:10:42 +0000 (0:00:01.740) 0:08:24.263 ********
2026-04-09 05:10:50.025773 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:10:50.025779 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:10:50.025785 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:10:50.025790 | orchestrator |
2026-04-09 05:10:50.025796 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-09 05:10:50.025813 | orchestrator | Thursday 09 April 2026 05:10:44 +0000 (0:00:01.812) 0:08:26.076 ********
2026-04-09 05:10:50.025820 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:10:50.025826 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:10:50.025831 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:10:50.025837 | orchestrator |
2026-04-09 05:10:50.025843 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-04-09 05:10:50.025849 | orchestrator | Thursday 09 April 2026 05:10:46 +0000 (0:00:02.226) 0:08:28.303 ********
2026-04-09 05:10:50.025855 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-04-09 05:10:50.025860 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-04-09 05:10:50.025866 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-04-09 05:10:50.025877 | orchestrator |
2026-04-09 05:10:50.025883 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 05:10:50.025890 | orchestrator | testbed-node-0 : ok=50  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 05:10:50.025897 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-09 05:10:50.025903 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-09 05:10:50.025909 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 05:10:50.025915 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 05:10:50.025921 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 05:10:50.025927 | orchestrator |
2026-04-09 05:10:50.025933 | orchestrator |
2026-04-09 05:10:50.025942 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 05:10:50.025948 | orchestrator | Thursday 09 April 2026 05:10:49 +0000 (0:00:02.677) 0:08:30.980 ********
2026-04-09 05:10:50.025954 | orchestrator | ===============================================================================
2026-04-09 05:10:50.025960 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 259.81s
2026-04-09 05:10:50.025966 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 35.61s
2026-04-09 05:10:50.025972 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 24.24s
2026-04-09 05:10:50.025978 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.13s
2026-04-09 05:10:50.025983 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 17.00s
2026-04-09 05:10:50.025989 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.91s
2026-04-09 05:10:50.025995 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 16.28s
2026-04-09 05:10:50.026001 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.24s
2026-04-09 05:10:50.026007 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.16s
2026-04-09 05:10:50.026012 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.25s
2026-04-09 05:10:50.026058 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 4.98s
2026-04-09 05:10:50.026064 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 4.17s
2026-04-09 05:10:50.026070 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.88s
2026-04-09 05:10:50.026076 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.46s
2026-04-09 05:10:50.026082 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.41s
2026-04-09 05:10:50.026087 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.30s
2026-04-09 05:10:50.026093 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.28s
2026-04-09 05:10:50.026099 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.98s
2026-04-09 05:10:50.026105 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.87s
2026-04-09 05:10:50.026111 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.86s
2026-04-09 05:10:50.220190 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-09 05:10:50.220274 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-09 05:10:50.220288 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-04-09 05:10:50.227006 | orchestrator | + set -e
2026-04-09 05:10:50.227106 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 05:10:50.227121 | orchestrator | ++ export INTERACTIVE=false
2026-04-09 05:10:50.227141 | orchestrator | ++ INTERACTIVE=false
2026-04-09 05:10:50.227166 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 05:10:50.227190 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 05:10:50.227216 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-04-09 05:10:51.576862 | orchestrator | 2026-04-09 05:10:51 | INFO  | Prepare task for execution of ceph-rolling_update.
2026-04-09 05:10:51.645472 | orchestrator | 2026-04-09 05:10:51 | INFO  | Task b6ab3f9b-c458-4cb1-a829-f6e666108111 (ceph-rolling_update) was prepared for execution.
2026-04-09 05:10:51.645558 | orchestrator | 2026-04-09 05:10:51 | INFO  | It takes a moment until task b6ab3f9b-c458-4cb1-a829-f6e666108111 (ceph-rolling_update) has been started and output is visible here.
2026-04-09 05:12:17.066894 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 05:12:17.067006 | orchestrator | 2.16.14
2026-04-09 05:12:17.067022 | orchestrator |
2026-04-09 05:12:17.067034 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-04-09 05:12:17.067045 | orchestrator |
2026-04-09 05:12:17.067056 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-04-09 05:12:17.067067 | orchestrator | Thursday 09 April 2026 05:10:59 +0000 (0:00:01.693) 0:00:01.693 ********
2026-04-09 05:12:17.067077 | orchestrator | skipping: [localhost]
2026-04-09 05:12:17.067087 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-04-09 05:12:17.067097 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-04-09 05:12:17.067107 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-04-09 05:12:17.067117 | orchestrator |
2026-04-09 05:12:17.067127 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-04-09 05:12:17.067137 | orchestrator |
2026-04-09 05:12:17.067146 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-04-09 05:12:17.067156 | orchestrator | Thursday 09 April 2026 05:11:01 +0000 (0:00:02.162) 0:00:03.855 ********
2026-04-09 05:12:17.067166 | orchestrator | ok: [testbed-node-0] => {
2026-04-09 05:12:17.067176 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-09 05:12:17.067187 | orchestrator | }
2026-04-09 05:12:17.067197 | orchestrator | ok: [testbed-node-1] => {
2026-04-09 05:12:17.067207 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-09 05:12:17.067216 | orchestrator | }
2026-04-09 05:12:17.067226 | orchestrator | ok: [testbed-node-2] => {
2026-04-09 05:12:17.067236 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-09 05:12:17.067245 | orchestrator | }
2026-04-09 05:12:17.067255 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 05:12:17.067265 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-09 05:12:17.067275 | orchestrator | }
2026-04-09 05:12:17.067284 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 05:12:17.067294 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-09 05:12:17.067303 | orchestrator | }
2026-04-09 05:12:17.067328 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 05:12:17.067338 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-09 05:12:17.067348 | orchestrator | }
2026-04-09 05:12:17.067358 | orchestrator | ok: [testbed-manager] => {
2026-04-09 05:12:17.067368 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-09 05:12:17.067377 | orchestrator | }
2026-04-09 05:12:17.067387 | orchestrator |
2026-04-09 05:12:17.067397 | orchestrator | TASK [Gather facts] ************************************************************
2026-04-09 05:12:17.067407 | orchestrator | Thursday 09 April 2026 05:11:08 +0000 (0:00:06.756) 0:00:10.612 ********
2026-04-09 05:12:17.067416 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:12:17.067449 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:12:17.067461 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:12:17.067472 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:12:17.067484 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:12:17.067494 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:12:17.067506 | orchestrator | ok: [testbed-manager]
2026-04-09 05:12:17.067518 | orchestrator |
2026-04-09 05:12:17.067530 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-04-09 05:12:17.067541 | orchestrator | Thursday 09 April 2026 05:11:14 +0000 (0:00:06.120) 0:00:16.732 ********
2026-04-09 05:12:17.067553 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 05:12:17.067564 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:12:17.067576 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:12:17.067587 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:12:17.067598 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:12:17.067610 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:12:17.067621 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:12:17.067633 | orchestrator |
2026-04-09 05:12:17.067644 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-04-09 05:12:17.067656 | orchestrator | Thursday 09 April 2026 05:11:45 +0000 (0:00:30.552) 0:00:47.284 ********
2026-04-09 05:12:17.067696 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:12:17.067709 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:12:17.067720 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:12:17.067731 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:12:17.067742 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:12:17.067753 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:12:17.067775 | orchestrator | ok: [testbed-manager]
2026-04-09 05:12:17.067787 | orchestrator |
2026-04-09 05:12:17.067799 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 05:12:17.067811 | orchestrator | Thursday 09 April 2026 05:11:47 +0000 (0:00:02.136) 0:00:49.420 ********
2026-04-09 05:12:17.067823 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-09 05:12:17.067837 | orchestrator |
2026-04-09 05:12:17.067847 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 05:12:17.067856 | orchestrator | Thursday 09 April 2026 05:11:50 +0000 (0:00:02.914) 0:00:52.335 ********
2026-04-09 05:12:17.067866 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:12:17.067876 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:12:17.067885 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:12:17.067894 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:12:17.067904 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:12:17.067913 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:12:17.067923 | orchestrator | ok: [testbed-manager]
2026-04-09 05:12:17.067933 | orchestrator |
2026-04-09 05:12:17.067960 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 05:12:17.067970 | orchestrator | Thursday 09 April 2026 05:11:53 +0000 (0:00:02.766) 0:00:55.102 ********
2026-04-09 05:12:17.067980 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:12:17.067989 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:12:17.067999 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:12:17.068008 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:12:17.068018 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:12:17.068027 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:12:17.068037 | orchestrator | ok: [testbed-manager]
2026-04-09 05:12:17.068046 | orchestrator |
2026-04-09 05:12:17.068056 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 05:12:17.068074 | orchestrator | Thursday 09 April 2026 05:11:55 +0000 (0:00:02.700) 0:00:57.238 ********
2026-04-09 05:12:17.068084 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:12:17.068093 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:12:17.068103 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:12:17.068112 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:12:17.068122 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:12:17.068131 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:12:17.068141 | orchestrator | ok: [testbed-manager]
2026-04-09 05:12:17.068150 | orchestrator |
2026-04-09 05:12:17.068160 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 05:12:17.068170 | orchestrator | Thursday 09 April 2026 05:11:58 +0000 (0:00:02.700) 0:00:59.938 ********
2026-04-09 05:12:17.068179 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:12:17.068189 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:12:17.068198 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:12:17.068207 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:12:17.068217 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:12:17.068226 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:12:17.068236 | orchestrator | ok: [testbed-manager]
2026-04-09 05:12:17.068246 | orchestrator |
2026-04-09 05:12:17.068255 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 05:12:17.068265 | orchestrator | Thursday 09 April 2026 05:12:00 +0000 (0:00:02.143) 0:01:02.082 ********
2026-04-09 05:12:17.068275 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:12:17.068284 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:12:17.068293 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:12:17.068303 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:12:17.068312 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:12:17.068322 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:12:17.068336 | orchestrator | ok: [testbed-manager]
2026-04-09 05:12:17.068346 | orchestrator |
2026-04-09 05:12:17.068356 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 05:12:17.068365 | orchestrator | Thursday 09 April 2026 05:12:02 +0000 (0:00:02.177) 0:01:04.259 ********
2026-04-09 05:12:17.068375 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:12:17.068384 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:12:17.068394 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:12:17.068403 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:12:17.068412 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:12:17.068422 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:12:17.068432 | orchestrator | ok: [testbed-manager]
2026-04-09 05:12:17.068441 | orchestrator |
2026-04-09 05:12:17.068451 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 05:12:17.068460 | orchestrator | Thursday 09 April 2026 05:12:04 +0000 (0:00:02.184) 0:01:06.443 ********
2026-04-09 05:12:17.068470 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:12:17.068480 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:12:17.068489 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:12:17.068499 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:12:17.068509 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:12:17.068518 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:12:17.068528 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:12:17.068537 | orchestrator |
2026-04-09 05:12:17.068547 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 05:12:17.068556 | orchestrator | Thursday 09 April 2026 05:12:06 +0000 (0:00:02.313) 0:01:08.757 ********
2026-04-09 05:12:17.068566 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:12:17.068576 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:12:17.068585 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:12:17.068595 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:12:17.068604 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:12:17.068613 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:12:17.068623 | orchestrator | ok: [testbed-manager]
2026-04-09 05:12:17.068633 | orchestrator |
2026-04-09 05:12:17.068643 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 05:12:17.068658 | orchestrator | Thursday 09 April 2026 05:12:09 +0000 (0:00:02.234) 0:01:10.992 ********
2026-04-09 05:12:17.068692 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:12:17.068702 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:12:17.068712 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:12:17.068722 | orchestrator |
2026-04-09 05:12:17.068731 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 05:12:17.068741 | orchestrator | Thursday 09 April 2026 05:12:11 +0000 (0:00:01.899) 0:01:12.891 ********
2026-04-09 05:12:17.068750 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:12:17.068760 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:12:17.068770 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:12:17.068780 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:12:17.068789 | orchestrator | ok:
[testbed-node-4] 2026-04-09 05:12:17.068799 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:12:17.068808 | orchestrator | ok: [testbed-manager] 2026-04-09 05:12:17.068818 | orchestrator | 2026-04-09 05:12:17.068827 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-09 05:12:17.068837 | orchestrator | Thursday 09 April 2026 05:12:13 +0000 (0:00:02.555) 0:01:15.446 ******** 2026-04-09 05:12:17.068847 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 05:12:17.068857 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:12:17.068866 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:12:17.068876 | orchestrator | 2026-04-09 05:12:17.068886 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 05:12:17.068896 | orchestrator | Thursday 09 April 2026 05:12:16 +0000 (0:00:03.325) 0:01:18.772 ******** 2026-04-09 05:12:17.068912 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 05:12:39.351435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 05:12:39.351528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 05:12:39.351537 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:39.351546 | orchestrator | 2026-04-09 05:12:39.351554 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 05:12:39.351563 | orchestrator | Thursday 09 April 2026 05:12:18 +0000 (0:00:01.394) 0:01:20.166 ******** 2026-04-09 05:12:39.351606 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 
05:12:39.351616 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 05:12:39.351624 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 05:12:39.351631 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:39.351638 | orchestrator | 2026-04-09 05:12:39.351645 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 05:12:39.351652 | orchestrator | Thursday 09 April 2026 05:12:20 +0000 (0:00:01.914) 0:01:22.081 ******** 2026-04-09 05:12:39.351675 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:39.351702 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:39.351709 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:39.351716 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:39.351723 | orchestrator | 2026-04-09 05:12:39.351730 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 05:12:39.351737 | orchestrator | Thursday 09 April 2026 05:12:21 +0000 (0:00:01.167) 0:01:23.249 ******** 2026-04-09 05:12:39.351746 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3b46de499f20', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:12:14.283773', 'end': '2026-04-09 05:12:14.335497', 'delta': '0:00:00.051724', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3b46de499f20'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 05:12:39.351767 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '344b9fc03006', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:12:15.117763', 'end': '2026-04-09 05:12:15.164965', 'delta': '0:00:00.047202', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['344b9fc03006'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 05:12:39.351774 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '66330ed4242e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:12:15.696840', 'end': '2026-04-09 05:12:15.741506', 'delta': '0:00:00.044666', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['66330ed4242e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 05:12:39.351781 | orchestrator | 2026-04-09 05:12:39.351788 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 05:12:39.351795 | orchestrator | Thursday 09 April 2026 05:12:22 +0000 (0:00:01.243) 0:01:24.492 ******** 2026-04-09 05:12:39.351802 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:12:39.351810 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:12:39.351817 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:12:39.351824 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:12:39.351836 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:12:39.351843 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:12:39.351850 | orchestrator | ok: [testbed-manager] 2026-04-09 05:12:39.351857 | orchestrator | 2026-04-09 05:12:39.351864 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 05:12:39.351871 | orchestrator | Thursday 09 April 2026 05:12:24 +0000 
(0:00:02.126) 0:01:26.619 ******** 2026-04-09 05:12:39.351881 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:39.351888 | orchestrator | 2026-04-09 05:12:39.351895 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 05:12:39.351902 | orchestrator | Thursday 09 April 2026 05:12:26 +0000 (0:00:01.264) 0:01:27.884 ******** 2026-04-09 05:12:39.351909 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:12:39.351915 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:12:39.351922 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:12:39.351929 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:12:39.351936 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:12:39.351943 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:12:39.351949 | orchestrator | ok: [testbed-manager] 2026-04-09 05:12:39.351956 | orchestrator | 2026-04-09 05:12:39.351963 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 05:12:39.351971 | orchestrator | Thursday 09 April 2026 05:12:28 +0000 (0:00:02.124) 0:01:30.008 ******** 2026-04-09 05:12:39.351977 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:12:39.351984 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:12:39.351992 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:12:39.352002 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:12:39.352012 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:12:39.352021 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:12:39.352031 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-09 05:12:39.352041 | orchestrator | 2026-04-09 05:12:39.352051 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:12:39.352061 | orchestrator 
| Thursday 09 April 2026 05:12:31 +0000 (0:00:03.401) 0:01:33.409 ******** 2026-04-09 05:12:39.352070 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:12:39.352079 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:12:39.352090 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:12:39.352100 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:12:39.352109 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:12:39.352119 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:12:39.352129 | orchestrator | ok: [testbed-manager] 2026-04-09 05:12:39.352138 | orchestrator | 2026-04-09 05:12:39.352149 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 05:12:39.352158 | orchestrator | Thursday 09 April 2026 05:12:33 +0000 (0:00:02.239) 0:01:35.649 ******** 2026-04-09 05:12:39.352168 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:39.352178 | orchestrator | 2026-04-09 05:12:39.352188 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 05:12:39.352197 | orchestrator | Thursday 09 April 2026 05:12:34 +0000 (0:00:01.139) 0:01:36.789 ******** 2026-04-09 05:12:39.352208 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:39.352218 | orchestrator | 2026-04-09 05:12:39.352228 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:12:39.352238 | orchestrator | Thursday 09 April 2026 05:12:36 +0000 (0:00:01.202) 0:01:37.992 ******** 2026-04-09 05:12:39.352248 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:39.352258 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:12:39.352269 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:12:39.352276 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:12:39.352283 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:12:39.352289 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:12:39.352296 | orchestrator 
| skipping: [testbed-manager] 2026-04-09 05:12:39.352309 | orchestrator | 2026-04-09 05:12:39.352316 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 05:12:39.352323 | orchestrator | Thursday 09 April 2026 05:12:38 +0000 (0:00:02.376) 0:01:40.369 ******** 2026-04-09 05:12:39.352330 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:39.352337 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:12:39.352343 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:12:39.352350 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:12:39.352357 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:12:39.352364 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:12:39.352374 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:12:51.252034 | orchestrator | 2026-04-09 05:12:51.252183 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 05:12:51.252201 | orchestrator | Thursday 09 April 2026 05:12:40 +0000 (0:00:01.949) 0:01:42.318 ******** 2026-04-09 05:12:51.252213 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:51.252226 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:12:51.252238 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:12:51.252249 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:12:51.252261 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:12:51.252272 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:12:51.252283 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:12:51.252293 | orchestrator | 2026-04-09 05:12:51.252305 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 05:12:51.252316 | orchestrator | Thursday 09 April 2026 05:12:42 +0000 (0:00:02.100) 0:01:44.419 ******** 2026-04-09 05:12:51.252327 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:51.252338 | orchestrator | 
skipping: [testbed-node-1] 2026-04-09 05:12:51.252349 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:12:51.252360 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:12:51.252371 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:12:51.252382 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:12:51.252393 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:12:51.252404 | orchestrator | 2026-04-09 05:12:51.252415 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 05:12:51.252426 | orchestrator | Thursday 09 April 2026 05:12:44 +0000 (0:00:02.004) 0:01:46.423 ******** 2026-04-09 05:12:51.252437 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:51.252448 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:12:51.252459 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:12:51.252470 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:12:51.252481 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:12:51.252492 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:12:51.252503 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:12:51.252513 | orchestrator | 2026-04-09 05:12:51.252562 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 05:12:51.252598 | orchestrator | Thursday 09 April 2026 05:12:46 +0000 (0:00:02.312) 0:01:48.735 ******** 2026-04-09 05:12:51.252612 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:51.252625 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:12:51.252638 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:12:51.252650 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:12:51.252664 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:12:51.252676 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:12:51.252687 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:12:51.252698 | orchestrator | 
2026-04-09 05:12:51.252709 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 05:12:51.252721 | orchestrator | Thursday 09 April 2026 05:12:48 +0000 (0:00:02.030) 0:01:50.766 ******** 2026-04-09 05:12:51.252732 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:51.252744 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:12:51.252755 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:12:51.252766 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:12:51.252805 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:12:51.252817 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:12:51.252828 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:12:51.252839 | orchestrator | 2026-04-09 05:12:51.252850 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 05:12:51.252861 | orchestrator | Thursday 09 April 2026 05:12:51 +0000 (0:00:02.155) 0:01:52.921 ******** 2026-04-09 05:12:51.252875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.252891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.252903 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.252939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:12:51.252954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.252965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.252977 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.253000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78f51fbd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 05:12:51.253024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.253044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484235 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:12:51.484319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484328 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '482e14db', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 05:12:51.484398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484418 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 05:12:51.484429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.484456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-14-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'holders': []}})  2026-04-09 05:12:51.484474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.739216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.739349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.739416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dc1c8a18', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16', 
'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:12:51.739436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.739448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.739460 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:12:51.739494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.739509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'uuids': ['9adc5058-59dc-41de-adf6-afc54c646e02'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ']}})  2026-04-09 05:12:51.739591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5d5b0f3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 05:12:51.739606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5']}})  2026-04-09 05:12:51.739619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.739631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.739642 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:12:51.739655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-11-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:12:51.739678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.889497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw', 'dm-uuid-CRYPT-LUKS2-34a00b1693eb41a48240b70c6fb1290d-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:12:51.889789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.889814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'uuids': ['34a00b16-93eb-41a4-8240-b70c6fb1290d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw']}})  2026-04-09 05:12:51.889829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141']}})  2026-04-09 05:12:51.889842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.889922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bd1f840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 
'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:12:51.889947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.889963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:51.889978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ', 'dm-uuid-CRYPT-LUKS2-9adc505859dc41deadf6afc54c646e02-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:12:51.889992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-04-09 05:12:51.890006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'uuids': ['c5a762f6-19fc-430f-b395-3c5066cc9fcd'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy']}})  2026-04-09 05:12:51.890124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60e6f74a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:12:52.053114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f']}})  2026-04-09 05:12:52.053241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.053271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.053293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-10-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:12:52.053314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.053335 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:12:52.053352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV', 'dm-uuid-CRYPT-LUKS2-952a49d36c2646fe9329a26e5adefe63-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:12:52.053364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 
05:12:52.053422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'uuids': ['952a49d3-6c26-46fe-9329-a26e5adefe63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV']}})  2026-04-09 05:12:52.053444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6']}})  2026-04-09 05:12:52.053457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.053468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.053492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9009f97f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:12:52.223996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.224118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.224138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy', 'dm-uuid-CRYPT-LUKS2-c5a762f619fc430fb3953c5066cc9fcd-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:12:52.224154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'uuids': ['0d8306b6-b8d9-4741-84fa-e650942907f5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN']}})  2026-04-09 05:12:52.224168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e55aa834', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 05:12:52.224181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e']}})  2026-04-09 05:12:52.224214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.224228 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:12:52.224259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.224277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:12:52.224290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.224309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE', 'dm-uuid-CRYPT-LUKS2-a0c575bd231a435faa33ebc924c5d720-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:12:52.224329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.224349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'uuids': ['a0c575bd-231a-435f-aa33-ebc924c5d720'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE']}})  2026-04-09 05:12:52.224380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6']}})  2026-04-09 05:12:52.224412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.300821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e4edfb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 
'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:12:52.301775 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.301838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.301852 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.301864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.301898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.301920 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:12:52.301934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN', 'dm-uuid-CRYPT-LUKS2-0d8306b6b8d9474184fae650942907f5-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:12:52.301946 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
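The long `skipping` entries above come from a ceph-ansible fact-gathering loop over `ansible_facts['devices']`: each loop device, removable drive, and already-used disk fails the task's `when:` condition and is reported individually. A minimal sketch (hypothetical function name, not the actual ceph-ansible code) of how an `osd_auto_discovery`-style filter over those same facts might work:

```python
# Minimal sketch, assuming facts shaped like the log output above.
# This is NOT the real ceph-ansible implementation; it only illustrates
# the skip conditions visible in the log: loop devices, removable media
# such as sr0, and disks that already carry partitions or holders.

def discover_osd_devices(devices):
    """Return device names that look usable as empty OSD disks."""
    candidates = []
    for name, info in devices.items():
        if info.get("removable") == "1":   # e.g. the sr0 QEMU DVD-ROM
            continue
        if info.get("sectors") == "0":     # loop0..loop7 placeholders
            continue
        if info.get("partitions"):         # already partitioned (sda)
            continue
        if info.get("holders"):            # claimed by LVM/crypt (sdb)
            continue
        candidates.append(name)
    return sorted(candidates)


# Facts heavily abridged from the log above:
facts = {
    "loop0": {"removable": "0", "sectors": "0", "partitions": {}, "holders": []},
    "sr0": {"removable": "1", "sectors": "1060", "partitions": {}, "holders": []},
    "sda": {"removable": "0", "sectors": "167772160",
            "partitions": {"sda1": {}}, "holders": []},
    "sdd": {"removable": "0", "sectors": "41943040", "partitions": {},
            "holders": []},
}
print(discover_osd_devices(facts))  # ['sdd']
```

In the job itself this filtering happens host-by-host, which is why the same loop0..loop7 and sr0 entries repeat for each testbed node.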
2026-04-09 05:12:52.301958 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:12:52.301971 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.301982 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:52.302077 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e', 'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5bdaf30', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part16', 'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part14', 
'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part15', 'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part1', 'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:12:53.932595 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:53.932704 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:12:53.932731 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:12:53.932755 | orchestrator | 2026-04-09 05:12:53.932769 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 05:12:53.932783 | orchestrator | Thursday 09 April 2026 05:12:53 +0000 (0:00:02.432) 0:01:55.354 ******** 2026-04-09 05:12:53.932797 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:53.932835 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:53.932847 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:53.932860 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:53.932905 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:53.932919 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:53.932930 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:53.932954 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78f51fbd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:53.932981 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069321 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069432 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069481 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069500 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069575 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069594 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069634 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069653 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069786 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '482e14db', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15', 
'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069821 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.069853 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441592 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:12:54.441717 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441744 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441763 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441783 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-14-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441823 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441843 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441907 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441931 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dc1c8a18', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441961 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.441981 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.442015 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:12:54.442134 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'uuids': ['9adc5058-59dc-41de-adf6-afc54c646e02'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5d5b0f3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626665 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626711 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:12:54.626742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-11-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626756 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw', 'dm-uuid-CRYPT-LUKS2-34a00b1693eb41a48240b70c6fb1290d-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626785 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626797 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'uuids': ['34a00b16-93eb-41a4-8240-b70c6fb1290d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.626825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.686494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.686629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bd1f840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 
'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.686670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.686699 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.686713 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'uuids': ['c5a762f6-19fc-430f-b395-3c5066cc9fcd'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.686725 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.686743 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60e6f74a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.686771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ', 'dm-uuid-CRYPT-LUKS2-9adc505859dc41deadf6afc54c646e02-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.686789 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886136 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886229 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886280 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886319 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:12:54.886334 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV', 'dm-uuid-CRYPT-LUKS2-952a49d36c2646fe9329a26e5adefe63-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886349 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886385 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'uuids': ['952a49d3-6c26-46fe-9329-a26e5adefe63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886422 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:54.886461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9009f97f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:55.014654 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:55.014762 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:12:55.014818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-04-09 05:12:55.014833 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy', 'dm-uuid-CRYPT-LUKS2-c5a762f619fc430fb3953c5066cc9fcd-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.014846 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:12:55.014860 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'uuids': ['0d8306b6-b8d9-4741-84fa-e650942907f5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN']}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.014892 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e55aa834', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.014906 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e']}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.014935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.014947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.014960 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.014971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.014993 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075573 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075792 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075812 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE', 'dm-uuid-CRYPT-LUKS2-a0c575bd231a435faa33ebc924c5d720-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075824 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075836 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075848 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075880 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075907 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'uuids': ['a0c575bd-231a-435f-aa33-ebc924c5d720'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE']}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075920 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:12:55.075944 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e', 'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5bdaf30', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part16', 'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part14', 'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part15', 'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part1', 'scsi-SQEMU_QEMU_HARDDISK_a5bdaf30-515d-4ec5-b4e0-017d8e5d901e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:13:01.848023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6']}}, 'ansible_loop_var': 'item'})
2026-04-09 05:13:01.848241 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:13:01.848261 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:13:01.848273 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:13:01.848289 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:13:01.848347 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e4edfb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:13:01.848384 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:13:01.848395 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:13:01.848407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN', 'dm-uuid-CRYPT-LUKS2-0d8306b6b8d9474184fae650942907f5-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:13:01.848417 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:13:01.848428 | orchestrator |
2026-04-09 05:13:01.848439 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-09 05:13:01.848450 | orchestrator | Thursday 09 April 2026 05:12:56 +0000 (0:00:02.892) 0:01:58.248 ********
2026-04-09 05:13:01.848460 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:13:01.848470 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:13:01.848517 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:13:01.848532 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:13:01.848544 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:13:01.848562 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:13:01.848575 | orchestrator | ok: [testbed-manager]
2026-04-09 05:13:01.848587 | orchestrator |
2026-04-09 05:13:01.848599 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 05:13:01.848611 | orchestrator | Thursday 09 April 2026 05:12:59 +0000 (0:00:02.685) 0:02:00.933 ********
2026-04-09 05:13:01.848623 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:13:01.848634 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:13:01.848646 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:13:01.848657 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:13:01.848668 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:13:01.848680 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:13:01.848690 | orchestrator | ok: [testbed-manager]
2026-04-09 05:13:01.848704 | orchestrator |
2026-04-09 05:13:01.848721 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 05:13:01.848740 | orchestrator | Thursday 09 April 2026 05:13:01 +0000 (0:00:01.974) 0:02:02.908 ********
2026-04-09 05:13:01.848757 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:13:01.848782 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:13:32.032266 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:13:32.032441 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:13:32.032471 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:13:32.032491 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:13:32.032509 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:13:32.032528 | orchestrator |
2026-04-09 05:13:32.032549 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 05:13:32.032571 | orchestrator | Thursday 09 April 2026 05:13:03 +0000 (0:00:02.494) 0:02:05.402 ********
2026-04-09 05:13:32.032590 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:13:32.032608 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:13:32.032629 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:13:32.032647 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.032667 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:13:32.032686 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:13:32.032706 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:13:32.032724 | orchestrator |
2026-04-09 05:13:32.032743 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 05:13:32.032764 | orchestrator | Thursday 09 April 2026 05:13:05 +0000 (0:00:01.955) 0:02:07.357 ********
2026-04-09 05:13:32.032808 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:13:32.032829 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:13:32.032849 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:13:32.032871 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.032893 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:13:32.032913 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:13:32.032934 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-04-09 05:13:32.032956 | orchestrator |
2026-04-09 05:13:32.032976 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 05:13:32.032994 | orchestrator | Thursday 09 April 2026 05:13:08 +0000 (0:00:02.656) 0:02:10.014 ********
2026-04-09 05:13:32.033016 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:13:32.033036 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:13:32.033059 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:13:32.033082 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.033102 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:13:32.033122 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:13:32.033144 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:13:32.033162 | orchestrator |
2026-04-09 05:13:32.033181 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 05:13:32.033199 | orchestrator | Thursday 09 April 2026 05:13:10 +0000 (0:00:02.019) 0:02:12.034 ********
2026-04-09 05:13:32.033219 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:13:32.033239 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 05:13:32.033288 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 05:13:32.033307 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 05:13:32.033324 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 05:13:32.033335 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 05:13:32.033347 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 05:13:32.033358 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 05:13:32.033393 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:13:32.033404 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 05:13:32.033416 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 05:13:32.033426 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 05:13:32.033437 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 05:13:32.033448 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 05:13:32.033459 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-09 05:13:32.033470 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 05:13:32.033481 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 05:13:32.033491 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-09 05:13:32.033502 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 05:13:32.033513 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-09 05:13:32.033524 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 05:13:32.033535 | orchestrator |
2026-04-09 05:13:32.033546 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 05:13:32.033557 | orchestrator | Thursday 09 April 2026 05:13:13 +0000 (0:00:03.231) 0:02:15.266 ********
2026-04-09 05:13:32.033568 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:13:32.033580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 05:13:32.033590 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 05:13:32.033601 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:13:32.033612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 05:13:32.033623 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 05:13:32.033634 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 05:13:32.033645 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:13:32.033656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 05:13:32.033666 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 05:13:32.033677 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:13:32.033688 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:13:32.033698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 05:13:32.033709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 05:13:32.033720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 05:13:32.033731 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.033741 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 05:13:32.033752 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 05:13:32.033785 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 05:13:32.033798 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:13:32.033808 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 05:13:32.033819 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 05:13:32.033830 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 05:13:32.033841 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:13:32.033852 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-09 05:13:32.033872 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-09 05:13:32.033883 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-09 05:13:32.033894 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:13:32.033905 | orchestrator |
2026-04-09 05:13:32.033916 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 05:13:32.033927 | orchestrator | Thursday 09 April 2026 05:13:15 +0000 (0:00:02.284) 0:02:17.551 ********
2026-04-09 05:13:32.033938 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:13:32.033949 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:13:32.033968 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:13:32.033980 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:13:32.033992 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 05:13:32.034003 | orchestrator |
2026-04-09 05:13:32.034015 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:13:32.034095 | orchestrator | Thursday 09 April 2026 05:13:17 +0000 (0:00:01.953) 0:02:19.505 ********
2026-04-09 05:13:32.034106 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.034118 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:13:32.034129 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:13:32.034140 | orchestrator |
2026-04-09 05:13:32.034150 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:13:32.034161 | orchestrator | Thursday 09 April 2026 05:13:19 +0000 (0:00:01.621) 0:02:21.126 ********
2026-04-09 05:13:32.034172 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.034183 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:13:32.034194 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:13:32.034205 | orchestrator |
2026-04-09 05:13:32.034216 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:13:32.034227 | orchestrator | Thursday 09 April 2026 05:13:20 +0000 (0:00:01.378) 0:02:22.505 ********
2026-04-09 05:13:32.034238 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.034249 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:13:32.034260 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:13:32.034271 | orchestrator |
2026-04-09 05:13:32.034282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:13:32.034293 | orchestrator | Thursday 09 April 2026 05:13:21 +0000 (0:00:01.364) 0:02:23.869 ********
2026-04-09 05:13:32.034304 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:13:32.034315 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:13:32.034326 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:13:32.034337 | orchestrator |
2026-04-09 05:13:32.034348 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:13:32.034406 | orchestrator | Thursday 09 April 2026 05:13:23 +0000 (0:00:01.507) 0:02:25.377 ********
2026-04-09 05:13:32.034419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 05:13:32.034430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 05:13:32.034441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 05:13:32.034452 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.034463 | orchestrator |
2026-04-09 05:13:32.034474 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 05:13:32.034485 | orchestrator | Thursday 09 April 2026 05:13:24 +0000 (0:00:01.452) 0:02:26.829 ********
2026-04-09 05:13:32.034495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 05:13:32.034506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 05:13:32.034517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 05:13:32.034528 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.034539 | orchestrator |
2026-04-09 05:13:32.034550 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 05:13:32.034569 | orchestrator | Thursday 09 April 2026 05:13:26 +0000 (0:00:01.741) 0:02:28.571 ********
2026-04-09 05:13:32.034580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 05:13:32.034591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 05:13:32.034602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 05:13:32.034613 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:13:32.034624 | orchestrator |
2026-04-09 05:13:32.034635 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 05:13:32.034646 | orchestrator | Thursday 09 April 2026 05:13:28 +0000 (0:00:01.696) 0:02:30.267 ********
2026-04-09 05:13:32.034657 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:13:32.034668 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:13:32.034679 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:13:32.034690 | orchestrator |
2026-04-09 05:13:32.034700 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 05:13:32.034711 | orchestrator | Thursday 09 April 2026 05:13:30 +0000 (0:00:01.753) 0:02:32.020 ********
2026-04-09 05:13:32.034722 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 05:13:32.034733 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 05:13:32.034744 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-09 05:13:32.034754 | orchestrator |
2026-04-09 05:13:32.034765 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 05:13:32.034776 | orchestrator | Thursday 09 April 2026 05:13:31 +0000 (0:00:01.579) 0:02:33.599 ********
2026-04-09 05:13:32.034787 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:13:32.034808 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:14:21.129719 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:14:21.129835 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 05:14:21.129851 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:14:21.129863 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:14:21.129874 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:14:21.129885 | orchestrator |
2026-04-09 05:14:21.129897 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 05:14:21.129909 | orchestrator | Thursday 09 April 2026 05:13:33 +0000 (0:00:01.894) 0:02:35.494 ********
2026-04-09 05:14:21.129921 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:14:21.129948 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:14:21.129960 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:14:21.129971 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 05:14:21.129982 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:14:21.129992 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:14:21.130003 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:14:21.130014 | orchestrator |
2026-04-09 05:14:21.130083 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-04-09 05:14:21.130094 | orchestrator | Thursday 09 April 2026 05:13:36 +0000 (0:00:03.026) 0:02:38.521 ********
2026-04-09 05:14:21.130105 | orchestrator | changed: [testbed-node-3]
2026-04-09 05:14:21.130118 | orchestrator | changed: [testbed-node-5]
2026-04-09 05:14:21.130129 | orchestrator | changed: [testbed-node-4]
2026-04-09 05:14:21.130141 | orchestrator | changed: [testbed-manager]
2026-04-09 05:14:21.130152 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:14:21.130163 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:14:21.130227 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:14:21.130250 | orchestrator |
2026-04-09 05:14:21.130272 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-04-09 05:14:21.130292 | orchestrator | Thursday 09 April 2026 05:13:48 +0000 (0:00:11.469) 0:02:49.991 ********
2026-04-09 05:14:21.130308 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.130321 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.130334 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.130346 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.130358 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.130370 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.130382 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.130394 | orchestrator |
2026-04-09 05:14:21.130406 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-04-09 05:14:21.130419 | orchestrator | Thursday 09 April 2026 05:13:50 +0000 (0:00:02.102) 0:02:52.093 ********
2026-04-09 05:14:21.130431 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.130445 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.130457 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.130469 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.130481 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.130494 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.130507 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.130518 | orchestrator |
2026-04-09 05:14:21.130529 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-04-09 05:14:21.130540 | orchestrator | Thursday 09 April 2026 05:13:52 +0000 (0:00:01.906) 0:02:53.999 ********
2026-04-09 05:14:21.130550 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.130561 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:14:21.130572 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:14:21.130582 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:14:21.130594 | orchestrator | changed: [testbed-node-3]
2026-04-09 05:14:21.130605 | orchestrator | changed: [testbed-node-4]
2026-04-09 05:14:21.130615 | orchestrator | changed: [testbed-node-5]
2026-04-09 05:14:21.130626 | orchestrator |
2026-04-09 05:14:21.130637 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-04-09 05:14:21.130648 | orchestrator | Thursday 09 April 2026 05:13:55 +0000 (0:00:03.112) 0:02:57.112 ********
2026-04-09 05:14:21.130660 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-09 05:14:21.130672 | orchestrator |
2026-04-09 05:14:21.130683 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-04-09 05:14:21.130694 | orchestrator | Thursday 09 April 2026 05:13:58 +0000 (0:00:02.994) 0:03:00.107 ********
2026-04-09 05:14:21.130705 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.130716 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.130727 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.130737 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.130748 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.130759 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.130769 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.130780 | orchestrator |
2026-04-09 05:14:21.130791 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-04-09 05:14:21.130802 | orchestrator | Thursday 09 April 2026 05:14:00 +0000 (0:00:01.982) 0:03:02.090 ********
2026-04-09 05:14:21.130813 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.130824 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.130834 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.130845 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.130856 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.130866 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.130904 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.130915 | orchestrator |
2026-04-09 05:14:21.130926 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-04-09 05:14:21.130937 | orchestrator | Thursday 09 April 2026 05:14:02 +0000 (0:00:02.188) 0:03:04.278 ********
2026-04-09 05:14:21.130948 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.130959 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.130970 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.130981 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.130992 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.131003 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.131013 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.131024 | orchestrator |
2026-04-09 05:14:21.131035 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-04-09 05:14:21.131046 | orchestrator | Thursday 09 April 2026 05:14:04 +0000 (0:00:01.855) 0:03:06.134 ********
2026-04-09 05:14:21.131057 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.131068 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.131085 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.131097 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.131107 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.131118 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.131129 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.131140 | orchestrator |
2026-04-09 05:14:21.131151 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-04-09 05:14:21.131162 | orchestrator | Thursday 09 April 2026 05:14:06 +0000 (0:00:02.126) 0:03:08.261 ********
2026-04-09 05:14:21.131173 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.131204 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.131217 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.131228 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.131239 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.131249 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.131260 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.131271 | orchestrator |
2026-04-09 05:14:21.131282 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-04-09 05:14:21.131293 | orchestrator | Thursday 09 April 2026 05:14:08 +0000 (0:00:02.041) 0:03:10.302 ********
2026-04-09 05:14:21.131303 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.131314 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.131325 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.131335 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.131346 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.131357 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.131367 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.131378 | orchestrator |
2026-04-09 05:14:21.131388 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-04-09 05:14:21.131399 | orchestrator | Thursday 09 April 2026 05:14:10 +0000 (0:00:02.186) 0:03:12.489 ********
2026-04-09 05:14:21.131410 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.131421 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.131431 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.131442 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.131452 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.131463 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.131473 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.131484 | orchestrator |
2026-04-09 05:14:21.131495 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-04-09 05:14:21.131506 | orchestrator | Thursday 09 April 2026 05:14:12 +0000 (0:00:01.919) 0:03:14.409 ********
2026-04-09 05:14:21.131517 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.131527 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.131538 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.131556 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.131567 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.131578 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.131589 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.131599 | orchestrator |
2026-04-09 05:14:21.131610 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-04-09 05:14:21.131621 | orchestrator | Thursday 09 April 2026 05:14:14 +0000 (0:00:02.207) 0:03:16.616 ********
2026-04-09 05:14:21.131632 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.131643 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.131653 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.131664 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.131674 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.131685 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.131696 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.131706 | orchestrator |
2026-04-09 05:14:21.131717 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-04-09 05:14:21.131728 | orchestrator | Thursday 09 April 2026 05:14:16 +0000 (0:00:02.134) 0:03:18.751 ********
2026-04-09 05:14:21.131739 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.131749 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.131760 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.131771 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.131781 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.131792 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.131802 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.131813 | orchestrator |
2026-04-09 05:14:21.131824 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-04-09 05:14:21.131835 | orchestrator | Thursday 09 April 2026 05:14:18 +0000 (0:00:01.938) 0:03:20.690 ********
2026-04-09 05:14:21.131846 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:21.131856 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:21.131867 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:21.131878 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:21.131888 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:21.131899 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:21.131909 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:21.131920 | orchestrator |
2026-04-09 05:14:21.131931 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-04-09 05:14:21.131942 | orchestrator | Thursday 09 April 2026 05:14:20 +0000 (0:00:02.158) 0:03:22.848 ********
2026-04-09 05:14:21.131959 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:43.711006 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:43.711177 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:43.711196 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.711208 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.711219 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:43.711230 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:43.711242 | orchestrator |
2026-04-09 05:14:43.711255 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-04-09 05:14:43.711267 | orchestrator | Thursday 09 April 2026 05:14:22 +0000 (0:00:01.978) 0:03:24.827 ********
2026-04-09 05:14:43.711278 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:43.711290 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:43.711301 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:43.711313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 05:14:43.711341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 05:14:43.711353 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.711385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 05:14:43.711397 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 05:14:43.711408 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.711419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})
2026-04-09 05:14:43.711430 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})
2026-04-09 05:14:43.711441 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:43.711452 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:43.711462 | orchestrator |
2026-04-09 05:14:43.711474 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-04-09 05:14:43.711484 | orchestrator | Thursday 09 April 2026 05:14:25 +0000 (0:00:02.221) 0:03:27.048 ********
2026-04-09 05:14:43.711495 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:43.711506 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:43.711517 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:43.711528 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.711538 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.711549 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:43.711560 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:43.711571 | orchestrator |
2026-04-09 05:14:43.711582 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-04-09 05:14:43.711593 | orchestrator | Thursday 09 April 2026 05:14:27 +0000 (0:00:01.919) 0:03:28.968 ********
2026-04-09 05:14:43.711604 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:43.711614 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:43.711625 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:43.711636 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.711646 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.711657 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:43.711668 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:43.711678 | orchestrator |
2026-04-09 05:14:43.711689 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-04-09 05:14:43.711700 | orchestrator | Thursday 09 April 2026 05:14:29 +0000 (0:00:02.247) 0:03:31.216 ********
2026-04-09 05:14:43.711710 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:43.711721 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:43.711732 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:43.711743 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.711754 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.711764 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:43.711775 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:43.711786 | orchestrator |
2026-04-09 05:14:43.711797 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-04-09 05:14:43.711807 | orchestrator | Thursday 09 April 2026 05:14:31 +0000 (0:00:02.042) 0:03:33.258 ********
2026-04-09 05:14:43.711818 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:43.711829 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:43.711839 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:43.711850 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.711861 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.711872 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:43.711882 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:43.711893 | orchestrator |
2026-04-09 05:14:43.711904 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-04-09 05:14:43.711915 | orchestrator | Thursday 09 April 2026 05:14:33 +0000 (0:00:02.174) 0:03:35.432 ********
2026-04-09 05:14:43.711933 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:43.711944 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:43.711954 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:43.711965 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.711975 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.711986 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:43.711997 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:43.712008 | orchestrator |
2026-04-09 05:14:43.712019 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-04-09 05:14:43.712030 | orchestrator | Thursday 09 April 2026 05:14:35 +0000 (0:00:02.196) 0:03:37.629 ********
2026-04-09 05:14:43.712041 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:43.712051 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:43.712062 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:43.712091 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.712103 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.712134 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:43.712146 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:43.712157 | orchestrator |
2026-04-09 05:14:43.712167 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-04-09 05:14:43.712179 | orchestrator | Thursday 09 April 2026 05:14:37 +0000 (0:00:02.002) 0:03:39.632 ********
2026-04-09 05:14:43.712190 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:14:43.712200 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:14:43.712211 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:14:43.712222 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:14:43.712234 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 05:14:43.712245 | orchestrator |
2026-04-09 05:14:43.712256 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-04-09 05:14:43.712273 | orchestrator | Thursday 09 April 2026 05:14:40 +0000 (0:00:01.404) 0:03:42.097 ********
2026-04-09 05:14:43.712285 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:14:43.712296 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:14:43.712307 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:14:43.712318 | orchestrator |
2026-04-09 05:14:43.712329 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-04-09 05:14:43.712340 | orchestrator | Thursday 09 April 2026 05:14:41 +0000 (0:00:01.404) 0:03:43.501 ********
2026-04-09 05:14:43.712352 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 05:14:43.712363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 05:14:43.712374 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.712385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 05:14:43.712396 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 05:14:43.712407 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.712418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})
2026-04-09 05:14:43.712429 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})
2026-04-09 05:14:43.712440 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:43.712451 | orchestrator |
2026-04-09 05:14:43.712462 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-04-09 05:14:43.712473 | orchestrator | Thursday 09 April 2026 05:14:43 +0000 (0:00:01.478) 0:03:44.980 ********
2026-04-09 05:14:43.712494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}, 'ansible_loop_var': 'item'})
2026-04-09 05:14:43.712507 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'}, 'ansible_loop_var': 'item'})
2026-04-09 05:14:43.712518 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:43.712530 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'}, 'ansible_loop_var': 'item'})
2026-04-09 05:14:43.712541 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'}, 'ansible_loop_var': 'item'})
2026-04-09 05:14:43.712552 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:43.712564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'}, 'ansible_loop_var': 'item'})
2026-04-09 05:14:43.712582 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}, 'ansible_loop_var': 'item'})
2026-04-09 05:14:53.336387 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:53.336501 | orchestrator |
2026-04-09 05:14:53.336518 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-04-09 05:14:53.336531 | orchestrator | Thursday 09 April 2026 05:14:44 +0000 (0:00:01.705) 0:03:46.686 ********
2026-04-09 05:14:53.336543 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:53.336555 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:53.336566 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:53.336577 | orchestrator |
2026-04-09 05:14:53.336589 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-04-09 05:14:53.336617 | orchestrator | Thursday 09 April 2026 05:14:46 +0000 (0:00:01.372) 0:03:48.058 ********
2026-04-09 05:14:53.336629 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:53.336640 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:53.336652 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:53.336664 | orchestrator |
2026-04-09 05:14:53.336675 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-04-09 05:14:53.336686 | orchestrator | Thursday 09 April 2026 05:14:47 +0000 (0:00:01.408) 0:03:49.466 ********
2026-04-09 05:14:53.336697 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:53.336709 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:53.336720 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:53.336731 | orchestrator |
2026-04-09 05:14:53.336742 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-04-09 05:14:53.336753 | orchestrator | Thursday 09 April 2026 05:14:49 +0000 (0:00:01.520) 0:03:50.987 ********
2026-04-09 05:14:53.336764 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:14:53.336799 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:14:53.336810 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:14:53.336822 | orchestrator |
2026-04-09 05:14:53.336833 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-04-09 05:14:53.336844 | orchestrator | Thursday 09 April 2026 05:14:50 +0000 (0:00:01.421) 0:03:52.409 ********
2026-04-09 05:14:53.336856 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})
2026-04-09 05:14:53.336869 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})
2026-04-09 05:14:53.336880 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})
2026-04-09 05:14:53.336891 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})
2026-04-09 05:14:53.336902 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})
2026-04-09 05:14:53.336913 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})
2026-04-09 05:14:53.336927 | orchestrator |
2026-04-09 05:14:53.336941 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-04-09 05:14:53.336955 | orchestrator | Thursday 09 April 2026 05:14:52 +0000 (0:00:02.393) 0:03:54.803 ********
2026-04-09 05:14:53.336973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5/osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1775703642.165412, 'mtime': 1775703642.1604118, 'ctime': 1775703642.1604118, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5/osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}, 'ansible_loop_var': 'item'})
2026-04-09 05:14:53.337015 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141/osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1775703660.899705, 'mtime': 1775703660.893705, 'ctime': 1775703660.893705, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141/osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm':
'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:53.337039 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:14:53.337054 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f/osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1775703642.8454444, 'mtime': 1775703642.8404443, 'ctime': 1775703642.8404443, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f/osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:53.337069 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6/osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 
960, 'dev': 6, 'nlink': 1, 'atime': 1775703661.7567427, 'mtime': 1775703661.7477427, 'ctime': 1775703661.7477427, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6/osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:53.337120 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:14:53.337152 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e/osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1775703644.8143878, 'mtime': 1775703644.8043878, 'ctime': 1775703644.8043878, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e/osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.354649 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6/osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1775703665.677708, 'mtime': 1775703665.6727078, 'ctime': 1775703665.6727078, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6/osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.354742 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:14:59.354752 | orchestrator | 2026-04-09 05:14:59.354760 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-04-09 05:14:59.354768 | 
orchestrator | Thursday 09 April 2026 05:14:54 +0000 (0:00:01.536) 0:03:56.340 ******** 2026-04-09 05:14:59.354775 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 05:14:59.354784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 05:14:59.354790 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:14:59.354797 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})  2026-04-09 05:14:59.354803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})  2026-04-09 05:14:59.354809 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:14:59.354816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 05:14:59.354822 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 05:14:59.354829 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:14:59.354835 | orchestrator | 2026-04-09 05:14:59.354842 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-04-09 05:14:59.354849 | orchestrator | Thursday 09 April 2026 05:14:55 +0000 (0:00:01.374) 0:03:57.715 ******** 2026-04-09 05:14:59.354857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 
'item': {'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.354865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.354890 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:14:59.354910 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.354930 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.354937 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:14:59.354943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.354950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 
'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.354956 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:14:59.354963 | orchestrator | 2026-04-09 05:14:59.354969 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-04-09 05:14:59.354975 | orchestrator | Thursday 09 April 2026 05:14:57 +0000 (0:00:01.528) 0:03:59.243 ******** 2026-04-09 05:14:59.354982 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'})  2026-04-09 05:14:59.354988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'})  2026-04-09 05:14:59.354994 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:14:59.355001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'})  2026-04-09 05:14:59.355007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'})  2026-04-09 05:14:59.355013 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:14:59.355019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'})  2026-04-09 05:14:59.355026 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'})  2026-04-09 05:14:59.355032 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:14:59.355038 | orchestrator | 2026-04-09 05:14:59.355045 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is 
not a device or doesn't exist] *** 2026-04-09 05:14:59.355051 | orchestrator | Thursday 09 April 2026 05:14:58 +0000 (0:00:01.624) 0:04:00.867 ******** 2026-04-09 05:14:59.355101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-2f59a7c8-f88e-51a3-9620-37640e0ff9b5', 'data_vg': 'ceph-2f59a7c8-f88e-51a3-9620-37640e0ff9b5'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.355114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-1db77c01-2d77-5e1e-8d0a-4e535706b141', 'data_vg': 'ceph-1db77c01-2d77-5e1e-8d0a-4e535706b141'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.355121 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:14:59.355127 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-68e90870-4763-57e7-8e76-63c40a6d6d6f', 'data_vg': 'ceph-68e90870-4763-57e7-8e76-63c40a6d6d6f'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.355134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-9961abb4-5e3b-57c6-b852-cf206941d3b6', 'data_vg': 'ceph-9961abb4-5e3b-57c6-b852-cf206941d3b6'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.355140 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:14:59.355150 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 
'osd-block-27c4b53f-c2bf-5253-84b2-9319684e0f9e', 'data_vg': 'ceph-27c4b53f-c2bf-5253-84b2-9319684e0f9e'}, 'ansible_loop_var': 'item'})  2026-04-09 05:14:59.355160 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6', 'data_vg': 'ceph-07250cb7-fce6-51fa-be28-6bf5f5cf4ef6'}, 'ansible_loop_var': 'item'})  2026-04-09 05:15:09.651703 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:15:09.651826 | orchestrator | 2026-04-09 05:15:09.651854 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-04-09 05:15:09.651878 | orchestrator | Thursday 09 April 2026 05:15:00 +0000 (0:00:01.478) 0:04:02.346 ******** 2026-04-09 05:15:09.651898 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:15:09.651917 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:15:09.651935 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:15:09.651955 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:15:09.651974 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:15:09.651992 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:15:09.652011 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:15:09.652081 | orchestrator | 2026-04-09 05:15:09.652103 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-04-09 05:15:09.652123 | orchestrator | Thursday 09 April 2026 05:15:02 +0000 (0:00:01.942) 0:04:04.288 ******** 2026-04-09 05:15:09.652142 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:15:09.652161 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:15:09.652179 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:15:09.652198 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:15:09.652216 | orchestrator | included: 
/ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 05:15:09.652236 | orchestrator | 2026-04-09 05:15:09.652255 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-04-09 05:15:09.652276 | orchestrator | Thursday 09 April 2026 05:15:04 +0000 (0:00:02.501) 0:04:06.790 ******** 2026-04-09 05:15:09.652296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652411 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:15:09.652425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-09 05:15:09.652469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652480 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:15:09.652491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652545 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:15:09.652556 | orchestrator | 2026-04-09 05:15:09.652567 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-04-09 05:15:09.652578 | orchestrator | Thursday 09 April 2026 05:15:06 +0000 (0:00:01.407) 0:04:08.197 ******** 2026-04-09 05:15:09.652601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 
05:15:09.652635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652728 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:15:09.652739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652750 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:15:09.652761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-09 05:15:09.652805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652816 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:15:09.652827 | orchestrator | 2026-04-09 05:15:09.652838 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-04-09 05:15:09.652849 | orchestrator | Thursday 09 April 2026 05:15:07 +0000 (0:00:01.586) 0:04:09.783 ******** 2026-04-09 05:15:09.652860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652916 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:15:09.652927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 
05:15:09.652960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.652982 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:15:09.652993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.653004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.653019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.653055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.653067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 05:15:09.653078 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:15:09.653096 | orchestrator | 2026-04-09 05:15:09.653107 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-04-09 05:15:09.653118 | orchestrator | Thursday 09 April 2026 05:15:09 +0000 (0:00:01.423) 0:04:11.207 ******** 2026-04-09 05:15:09.653130 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:15:09.653141 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:15:09.653159 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:15:24.377273 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:15:24.377416 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:15:24.377442 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 05:15:24.377462 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:15:24.377481 | orchestrator | 2026-04-09 05:15:24.377501 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-04-09 05:15:24.377521 | orchestrator | Thursday 09 April 2026 05:15:11 +0000 (0:00:01.792) 0:04:13.000 ******** 2026-04-09 05:15:24.377539 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:15:24.377558 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:15:24.377576 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:15:24.377594 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:15:24.377612 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:15:24.377630 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:15:24.377649 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:15:24.377668 | orchestrator | 2026-04-09 05:15:24.377688 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-04-09 05:15:24.377706 | orchestrator | Thursday 09 April 2026 05:15:13 +0000 (0:00:02.120) 0:04:15.120 ******** 2026-04-09 05:15:24.377725 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:15:24.377745 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:15:24.377763 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:15:24.377781 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:15:24.377799 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:15:24.377819 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:15:24.377837 | orchestrator | skipping: [testbed-manager] 2026-04-09 05:15:24.377855 | orchestrator | 2026-04-09 05:15:24.377874 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
***
2026-04-09 05:15:24.377895 | orchestrator | Thursday 09 April 2026 05:15:15 +0000 (0:00:02.114) 0:04:17.234 ********
2026-04-09 05:15:24.377913 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:24.377931 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:24.377950 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:24.377969 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:24.378012 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:24.378111 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:24.378131 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:24.378150 | orchestrator |
2026-04-09 05:15:24.378169 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-04-09 05:15:24.378190 | orchestrator | Thursday 09 April 2026 05:15:17 +0000 (0:00:01.913) 0:04:19.148 ********
2026-04-09 05:15:24.378210 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:24.378230 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:24.378248 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:24.378262 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:24.378273 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:24.378284 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:24.378295 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:24.378306 | orchestrator |
2026-04-09 05:15:24.378317 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-04-09 05:15:24.378328 | orchestrator | Thursday 09 April 2026 05:15:19 +0000 (0:00:02.093) 0:04:21.241 ********
2026-04-09 05:15:24.378339 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:24.378350 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:24.378361 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:24.378398 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:24.378410 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:24.378422 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:24.378433 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:24.378443 | orchestrator |
2026-04-09 05:15:24.378454 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-04-09 05:15:24.378465 | orchestrator | Thursday 09 April 2026 05:15:21 +0000 (0:00:01.848) 0:04:23.090 ********
2026-04-09 05:15:24.378476 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:24.378486 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:24.378497 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:24.378508 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:24.378519 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:24.378530 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:24.378540 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:24.378551 | orchestrator |
2026-04-09 05:15:24.378562 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-04-09 05:15:24.378573 | orchestrator | Thursday 09 April 2026 05:15:23 +0000 (0:00:02.235) 0:04:25.326 ********
2026-04-09 05:15:24.378585 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:24.378598 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:24.378611 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:24.378638 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:24.378651 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:24.378664 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:24.378696 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:24.378708 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:24.378719 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:24.378730 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:24.378741 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:24.378752 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:24.378764 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:24.378774 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:24.378793 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:24.378804 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:24.378816 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:24.378827 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:24.378838 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:24.378849 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:24.378860 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:24.378871 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:24.378882 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:24.378893 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:24.378904 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:24.378915 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:24.378926 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:24.378943 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:24.378954 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:24.378965 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:24.378976 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:24.379027 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:28.794753 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:28.794856 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:28.794872 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:28.794909 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:28.794920 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:28.794933 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:28.794944 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:28.794956 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:28.795006 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:28.795018 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:28.795029 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:28.795040 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:28.795051 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:28.795062 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:28.795073 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:28.795084 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:28.795095 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:28.795106 | orchestrator |
2026-04-09 05:15:28.795118 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-04-09 05:15:28.795131 | orchestrator | Thursday 09 April 2026 05:15:25 +0000 (0:00:02.282) 0:04:27.609 ********
2026-04-09 05:15:28.795141 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:28.795152 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:28.795163 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:28.795174 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:28.795184 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:28.795195 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:28.795205 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:28.795216 | orchestrator |
2026-04-09 05:15:28.795227 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-04-09 05:15:28.795238 | orchestrator | Thursday 09 April 2026 05:15:27 +0000 (0:00:02.193) 0:04:29.802 ********
2026-04-09 05:15:28.795263 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:28.795275 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:28.795288 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:28.795309 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:28.795340 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:28.795354 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:28.795367 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:28.795380 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:28.795393 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:28.795406 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:28.795419 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:28.795431 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:28.795445 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:28.795457 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:28.795470 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:28.795482 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:28.795495 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:28.795507 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:28.795520 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:28.795534 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:28.795546 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:28.795559 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:28.795571 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:28.795583 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:28.795603 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:28.795622 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:28.795634 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:28.795645 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:28.795656 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:28.795675 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:57.577660 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:57.577773 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:57.577792 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:57.577806 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:57.577818 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:57.577829 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-09 05:15:57.577840 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-09 05:15:57.577852 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-09 05:15:57.577864 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:57.577878 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:57.577986 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-09 05:15:57.578001 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:57.578013 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:57.578087 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:57.578100 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:57.578137 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:57.578150 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-09 05:15:57.578171 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-09 05:15:57.578183 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:57.578195 | orchestrator |
2026-04-09 05:15:57.578209 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-04-09 05:15:57.578222 | orchestrator | Thursday 09 April 2026 05:15:30 +0000 (0:00:02.376) 0:04:32.179 ********
2026-04-09 05:15:57.578235 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:57.578262 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:57.578275 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:57.578288 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:57.578300 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:57.578313 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:57.578325 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:57.578336 | orchestrator |
2026-04-09 05:15:57.578348 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-04-09 05:15:57.578359 | orchestrator | Thursday 09 April 2026 05:15:32 +0000 (0:00:02.118) 0:04:34.403 ********
2026-04-09 05:15:57.578370 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:57.578381 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:57.578392 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:57.578402 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:57.578413 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:57.578424 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:57.578434 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:57.578445 | orchestrator |
2026-04-09 05:15:57.578456 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-04-09 05:15:57.578485 | orchestrator | Thursday 09 April 2026 05:15:34 +0000 (0:00:02.118) 0:04:36.522 ********
2026-04-09 05:15:57.578497 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:57.578508 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:57.578519 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:57.578530 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:57.578540 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:57.578551 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:57.578562 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:57.578573 | orchestrator |
2026-04-09 05:15:57.578584 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-04-09 05:15:57.578595 | orchestrator | Thursday 09 April 2026 05:15:37 +0000 (0:00:02.373) 0:04:38.895 ********
2026-04-09 05:15:57.578606 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-09 05:15:57.578634 | orchestrator |
2026-04-09 05:15:57.578646 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-04-09 05:15:57.578668 | orchestrator | Thursday 09 April 2026 05:15:39 +0000 (0:00:02.837) 0:04:41.733 ********
2026-04-09 05:15:57.578679 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-09 05:15:57.578691 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-09 05:15:57.578702 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-09 05:15:57.578712 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-09 05:15:57.578733 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-09 05:15:57.578744 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-09 05:15:57.578755 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-09 05:15:57.578766 | orchestrator |
2026-04-09 05:15:57.578777 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-04-09 05:15:57.578787 | orchestrator | Thursday 09 April 2026 05:15:41 +0000 (0:00:02.132) 0:04:43.866 ********
2026-04-09 05:15:57.578798 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:57.578809 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:57.578819 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:57.578830 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:57.578841 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:57.578852 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:57.578863 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:57.578874 | orchestrator |
2026-04-09 05:15:57.578908 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-04-09 05:15:57.578929 | orchestrator | Thursday 09 April 2026 05:15:44 +0000 (0:00:02.186) 0:04:46.052 ********
2026-04-09 05:15:57.578948 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:57.578967 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:57.578985 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:57.578996 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:57.579007 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:57.579018 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:57.579028 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:57.579039 | orchestrator |
2026-04-09 05:15:57.579050 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-04-09 05:15:57.579061 | orchestrator | Thursday 09 April 2026 05:15:46 +0000 (0:00:02.104) 0:04:48.156 ********
2026-04-09 05:15:57.579072 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:15:57.579083 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:15:57.579094 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:15:57.579104 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:15:57.579115 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:15:57.579126 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:15:57.579137 | orchestrator | ok: [testbed-manager]
2026-04-09 05:15:57.579148 | orchestrator |
2026-04-09 05:15:57.579159 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-04-09 05:15:57.579170 | orchestrator | Thursday 09 April 2026 05:15:48 +0000 (0:00:02.705) 0:04:50.862 ********
2026-04-09 05:15:57.579181 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:57.579191 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:57.579202 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:57.579213 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:57.579224 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:57.579234 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:57.579245 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:57.579256 | orchestrator |
2026-04-09 05:15:57.579267 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-04-09 05:15:57.579285 | orchestrator | Thursday 09 April 2026 05:15:51 +0000 (0:00:02.453) 0:04:53.316 ********
2026-04-09 05:15:57.579296 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:57.579307 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:15:57.579317 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:15:57.579328 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:15:57.579339 | orchestrator | skipping: [testbed-node-4]
2026-04-09 05:15:57.579349 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:15:57.579360 | orchestrator | skipping: [testbed-manager]
2026-04-09 05:15:57.579371 | orchestrator |
2026-04-09 05:15:57.579382 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-04-09 05:15:57.579400 | orchestrator | Thursday 09 April 2026 05:15:54 +0000 (0:00:02.566) 0:04:55.883 ********
2026-04-09 05:15:57.579411 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:15:57.579422 | orchestrator |
2026-04-09 05:15:57.579433 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-04-09 05:15:57.579444 | orchestrator | Thursday 09 April 2026 05:15:56 +0000 (0:00:02.784) 0:04:58.667 ********
2026-04-09 05:15:57.579454 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:15:57.579465 | orchestrator |
2026-04-09 05:15:57.579476 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-04-09 05:15:57.579487 | orchestrator |
2026-04-09 05:15:57.579506 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 05:16:37.088295 | orchestrator | Thursday 09 April 2026 05:15:58 +0000 (0:00:01.492) 0:05:00.160 ********
2026-04-09 05:16:37.088410 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.088427 | orchestrator |
2026-04-09 05:16:37.088440 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 05:16:37.088453 | orchestrator | Thursday 09 April 2026 05:15:59 +0000 (0:00:01.422) 0:05:01.583 ********
2026-04-09 05:16:37.088465 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.088476 | orchestrator |
2026-04-09 05:16:37.088488 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-04-09 05:16:37.088500 | orchestrator | Thursday 09 April 2026 05:16:00 +0000 (0:00:01.129) 0:05:02.712 ********
2026-04-09 05:16:37.088513 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-09 05:16:37.088528 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-09 05:16:37.088540 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-09 05:16:37.088551 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-09 05:16:37.088564 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-09 05:16:37.088576 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}])
2026-04-09 05:16:37.088589 | orchestrator |
2026-04-09 05:16:37.088602 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-09 05:16:37.088636 | orchestrator |
2026-04-09 05:16:37.088649 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-09 05:16:37.088661 | orchestrator | Thursday 09 April 2026 05:16:11 +0000 (0:00:10.608) 0:05:13.321 ********
2026-04-09 05:16:37.088672 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.088683 | orchestrator |
2026-04-09 05:16:37.088694 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-09 05:16:37.088720 | orchestrator | Thursday 09 April 2026 05:16:12 +0000 (0:00:01.539) 0:05:14.860 ********
2026-04-09 05:16:37.088732 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.088743 | orchestrator |
2026-04-09 05:16:37.088754 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-09 05:16:37.088765 | orchestrator | Thursday 09 April 2026 05:16:14 +0000 (0:00:01.128) 0:05:15.989 ********
2026-04-09 05:16:37.088804 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:16:37.088817 | orchestrator |
2026-04-09 05:16:37.088830 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-09 05:16:37.088845 | orchestrator | Thursday 09 April 2026 05:16:15 +0000 (0:00:01.117) 0:05:17.107 ********
2026-04-09 05:16:37.088859 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.088874 | orchestrator |
2026-04-09 05:16:37.088888 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 05:16:37.088903 | orchestrator | Thursday 09 April 2026 05:16:16 +0000 (0:00:01.145) 0:05:18.252 ********
2026-04-09 05:16:37.088916 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-09 05:16:37.088930 | orchestrator |
2026-04-09 05:16:37.088944 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 05:16:37.088958 | orchestrator | Thursday 09 April 2026 05:16:17 +0000 (0:00:01.123) 0:05:19.375 ********
2026-04-09 05:16:37.088991 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.089005 | orchestrator |
2026-04-09 05:16:37.089019 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 05:16:37.089033 | orchestrator | Thursday 09 April 2026 05:16:18 +0000 (0:00:01.465) 0:05:20.841 ********
2026-04-09 05:16:37.089047 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.089060 | orchestrator |
2026-04-09 05:16:37.089074 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 05:16:37.089087 | orchestrator | Thursday 09 April 2026 05:16:20 +0000 (0:00:01.463) 0:05:22.008 ********
2026-04-09 05:16:37.089101 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.089114 | orchestrator |
2026-04-09 05:16:37.089128 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 05:16:37.089142 | orchestrator | Thursday 09 April 2026 05:16:21 +0000 (0:00:01.463) 0:05:23.472 ********
2026-04-09 05:16:37.089156 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.089169 | orchestrator |
2026-04-09 05:16:37.089183 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 05:16:37.089194 | orchestrator | Thursday 09 April 2026 05:16:22 +0000 (0:00:01.143) 0:05:24.616 ********
2026-04-09 05:16:37.089206 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.089217 | orchestrator |
2026-04-09 05:16:37.089229 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 05:16:37.089240 | orchestrator | Thursday 09 April 2026 05:16:23 +0000 (0:00:01.139) 0:05:25.756 ********
2026-04-09 05:16:37.089251 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.089262 | orchestrator |
2026-04-09 05:16:37.089274 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 05:16:37.089286 | orchestrator | Thursday 09 April 2026 05:16:25 +0000 (0:00:01.140) 0:05:26.897 ********
2026-04-09 05:16:37.089297 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:16:37.089308 | orchestrator |
2026-04-09 05:16:37.089320 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 05:16:37.089331 | orchestrator | Thursday 09 April 2026 05:16:26 +0000 (0:00:01.318) 0:05:28.215 ********
2026-04-09 05:16:37.089352 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.089364 | orchestrator |
2026-04-09 05:16:37.089376 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 05:16:37.089387 | orchestrator | Thursday 09 April 2026 05:16:27 +0000 (0:00:01.189) 0:05:29.405 ********
2026-04-09 05:16:37.089399 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:16:37.089411 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:16:37.089422 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:16:37.089433 | orchestrator |
2026-04-09 05:16:37.089445 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 05:16:37.089456 | orchestrator | Thursday 09 April 2026 05:16:29 +0000 (0:00:01.694) 0:05:31.099 ********
2026-04-09 05:16:37.089468 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:16:37.089478 |
orchestrator | 2026-04-09 05:16:37.089490 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-09 05:16:37.089501 | orchestrator | Thursday 09 April 2026 05:16:30 +0000 (0:00:01.249) 0:05:32.349 ******** 2026-04-09 05:16:37.089513 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 05:16:37.089524 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:16:37.089536 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:16:37.089546 | orchestrator | 2026-04-09 05:16:37.089558 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 05:16:37.089569 | orchestrator | Thursday 09 April 2026 05:16:33 +0000 (0:00:03.164) 0:05:35.514 ******** 2026-04-09 05:16:37.089581 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 05:16:37.089592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 05:16:37.089604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 05:16:37.089614 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:37.089626 | orchestrator | 2026-04-09 05:16:37.089637 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 05:16:37.089649 | orchestrator | Thursday 09 April 2026 05:16:35 +0000 (0:00:01.410) 0:05:36.925 ******** 2026-04-09 05:16:37.089662 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 05:16:37.089676 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 05:16:37.089688 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 05:16:37.089700 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:37.089711 | orchestrator | 2026-04-09 05:16:37.089723 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 05:16:37.089734 | orchestrator | Thursday 09 April 2026 05:16:37 +0000 (0:00:01.959) 0:05:38.884 ******** 2026-04-09 05:16:37.089754 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:16:57.127316 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:16:57.127511 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:16:57.127545 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.127567 | orchestrator | 2026-04-09 05:16:57.127589 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 05:16:57.127610 | orchestrator | Thursday 09 April 2026 05:16:38 +0000 (0:00:01.161) 0:05:40.045 ******** 2026-04-09 05:16:57.127632 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3b46de499f20', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:16:31.025180', 'end': '2026-04-09 05:16:31.062272', 'delta': '0:00:00.037092', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3b46de499f20'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 05:16:57.127652 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '344b9fc03006', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:16:31.572056', 'end': '2026-04-09 05:16:31.622881', 'delta': '0:00:00.050825', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['344b9fc03006'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 05:16:57.127673 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '66330ed4242e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:16:32.435652', 'end': '2026-04-09 05:16:32.491259', 'delta': '0:00:00.055607', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['66330ed4242e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 05:16:57.127686 | orchestrator | 2026-04-09 05:16:57.127700 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 05:16:57.127714 | orchestrator | Thursday 09 April 2026 05:16:39 +0000 (0:00:01.248) 0:05:41.294 ******** 2026-04-09 05:16:57.127766 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:16:57.127781 | orchestrator | 2026-04-09 05:16:57.127794 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 05:16:57.127807 | orchestrator | Thursday 09 April 2026 05:16:41 +0000 (0:00:01.580) 0:05:42.874 ******** 2026-04-09 05:16:57.127820 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.127853 | orchestrator | 2026-04-09 05:16:57.127873 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 05:16:57.127891 | orchestrator | Thursday 09 April 2026 05:16:42 +0000 (0:00:01.258) 0:05:44.132 ******** 2026-04-09 05:16:57.127904 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:16:57.127916 | orchestrator | 2026-04-09 05:16:57.127929 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-04-09 05:16:57.127942 | orchestrator | Thursday 09 April 2026 05:16:43 +0000 (0:00:01.136) 0:05:45.269 ******** 2026-04-09 05:16:57.127978 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-09 05:16:57.127992 | orchestrator | 2026-04-09 05:16:57.128005 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:16:57.128018 | orchestrator | Thursday 09 April 2026 05:16:45 +0000 (0:00:02.078) 0:05:47.347 ******** 2026-04-09 05:16:57.128031 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:16:57.128044 | orchestrator | 2026-04-09 05:16:57.128057 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 05:16:57.128076 | orchestrator | Thursday 09 April 2026 05:16:46 +0000 (0:00:01.114) 0:05:48.462 ******** 2026-04-09 05:16:57.128089 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.128100 | orchestrator | 2026-04-09 05:16:57.128111 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 05:16:57.128122 | orchestrator | Thursday 09 April 2026 05:16:47 +0000 (0:00:01.122) 0:05:49.585 ******** 2026-04-09 05:16:57.128133 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.128144 | orchestrator | 2026-04-09 05:16:57.128187 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:16:57.128200 | orchestrator | Thursday 09 April 2026 05:16:48 +0000 (0:00:01.269) 0:05:50.854 ******** 2026-04-09 05:16:57.128211 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.128223 | orchestrator | 2026-04-09 05:16:57.128234 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 05:16:57.128286 | orchestrator | Thursday 09 April 2026 05:16:50 +0000 (0:00:01.119) 0:05:51.973 ******** 
2026-04-09 05:16:57.128301 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.128316 | orchestrator | 2026-04-09 05:16:57.128333 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 05:16:57.128352 | orchestrator | Thursday 09 April 2026 05:16:51 +0000 (0:00:01.126) 0:05:53.099 ******** 2026-04-09 05:16:57.128364 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.128400 | orchestrator | 2026-04-09 05:16:57.128411 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 05:16:57.128422 | orchestrator | Thursday 09 April 2026 05:16:52 +0000 (0:00:01.128) 0:05:54.228 ******** 2026-04-09 05:16:57.128433 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.128444 | orchestrator | 2026-04-09 05:16:57.128455 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 05:16:57.128466 | orchestrator | Thursday 09 April 2026 05:16:53 +0000 (0:00:01.208) 0:05:55.437 ******** 2026-04-09 05:16:57.128477 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.128488 | orchestrator | 2026-04-09 05:16:57.128499 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 05:16:57.128510 | orchestrator | Thursday 09 April 2026 05:16:54 +0000 (0:00:01.125) 0:05:56.562 ******** 2026-04-09 05:16:57.128520 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.128531 | orchestrator | 2026-04-09 05:16:57.128542 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 05:16:57.128554 | orchestrator | Thursday 09 April 2026 05:16:55 +0000 (0:00:01.173) 0:05:57.735 ******** 2026-04-09 05:16:57.128565 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:57.128576 | orchestrator | 2026-04-09 05:16:57.128587 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-04-09 05:16:57.128598 | orchestrator | Thursday 09 April 2026 05:16:57 +0000 (0:00:01.143) 0:05:58.879 ******** 2026-04-09 05:16:57.128610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:16:57.128652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:16:57.128664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:16:57.128677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:16:57.128699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:16:58.384186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:16:58.384307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:16:58.384358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78f51fbd', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:16:58.384400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:16:58.384413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:16:58.384425 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:16:58.384438 | orchestrator | 2026-04-09 05:16:58.384451 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 05:16:58.384463 | orchestrator | Thursday 09 April 2026 05:16:58 +0000 (0:00:01.240) 0:06:00.119 ******** 2026-04-09 05:16:58.384494 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:16:58.384508 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:16:58.384521 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:16:58.384542 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:16:58.384560 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:16:58.384573 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:16:58.384593 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:17:23.674309 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78f51fbd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:17:23.674485 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:17:23.674515 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:17:23.674530 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:17:23.674543 | orchestrator | 2026-04-09 05:17:23.674556 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 05:17:23.674568 | 
orchestrator | Thursday 09 April 2026 05:16:59 +0000 (0:00:01.298) 0:06:01.418 ********
2026-04-09 05:17:23.674579 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:17:23.674591 | orchestrator |
2026-04-09 05:17:23.674602 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 05:17:23.674613 | orchestrator | Thursday 09 April 2026 05:17:01 +0000 (0:00:01.541) 0:06:02.960 ********
2026-04-09 05:17:23.674624 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:17:23.674635 | orchestrator |
2026-04-09 05:17:23.674671 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 05:17:23.674703 | orchestrator | Thursday 09 April 2026 05:17:02 +0000 (0:00:01.134) 0:06:04.094 ********
2026-04-09 05:17:23.674715 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:17:23.674726 | orchestrator |
2026-04-09 05:17:23.674737 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 05:17:23.674748 | orchestrator | Thursday 09 April 2026 05:17:03 +0000 (0:00:01.520) 0:06:05.615 ********
2026-04-09 05:17:23.674759 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:17:23.674770 | orchestrator |
2026-04-09 05:17:23.674790 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 05:17:23.674802 | orchestrator | Thursday 09 April 2026 05:17:04 +0000 (0:00:01.182) 0:06:06.797 ********
2026-04-09 05:17:23.674813 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:17:23.674824 | orchestrator |
2026-04-09 05:17:23.674835 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 05:17:23.674845 | orchestrator | Thursday 09 April 2026 05:17:06 +0000 (0:00:01.257) 0:06:08.055 ********
2026-04-09 05:17:23.674857 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:17:23.674868 | orchestrator |
2026-04-09 05:17:23.674879 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 05:17:23.674890 | orchestrator | Thursday 09 April 2026 05:17:07 +0000 (0:00:01.146) 0:06:09.201 ********
2026-04-09 05:17:23.674902 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:17:23.674913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 05:17:23.674924 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 05:17:23.674935 | orchestrator |
2026-04-09 05:17:23.674946 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 05:17:23.674957 | orchestrator | Thursday 09 April 2026 05:17:09 +0000 (0:00:02.056) 0:06:11.258 ********
2026-04-09 05:17:23.674968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:17:23.674979 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 05:17:23.674991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 05:17:23.675002 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:17:23.675013 | orchestrator |
2026-04-09 05:17:23.675024 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 05:17:23.675035 | orchestrator | Thursday 09 April 2026 05:17:10 +0000 (0:00:01.213) 0:06:12.471 ********
2026-04-09 05:17:23.675045 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:17:23.675056 | orchestrator |
2026-04-09 05:17:23.675067 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 05:17:23.675078 | orchestrator | Thursday 09 April 2026 05:17:11 +0000 (0:00:01.129) 0:06:13.601 ********
2026-04-09 05:17:23.675089 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:17:23.675101 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:17:23.675112 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:17:23.675123 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 05:17:23.675134 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:17:23.675145 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:17:23.675164 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:17:23.675176 | orchestrator |
2026-04-09 05:17:23.675187 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 05:17:23.675198 | orchestrator | Thursday 09 April 2026 05:17:13 +0000 (0:00:02.114) 0:06:15.716 ********
2026-04-09 05:17:23.675209 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:17:23.675220 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:17:23.675231 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:17:23.675242 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 05:17:23.675253 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:17:23.675264 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:17:23.675274 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:17:23.675292 | orchestrator |
2026-04-09 05:17:23.675303 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-04-09 05:17:23.675314 | orchestrator | Thursday 09 April 2026 05:17:16 +0000 (0:00:02.864) 0:06:18.581 ********
2026-04-09 05:17:23.675325 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-09 05:17:23.675336 | orchestrator |
2026-04-09 05:17:23.675354 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-04-09 05:17:23.675371 | orchestrator | Thursday 09 April 2026 05:17:18 +0000 (0:00:02.284) 0:06:20.865 ********
2026-04-09 05:17:23.675383 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:17:23.675394 | orchestrator |
2026-04-09 05:17:23.675404 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-04-09 05:17:23.675415 | orchestrator | Thursday 09 April 2026 05:17:20 +0000 (0:00:01.249) 0:06:22.115 ********
2026-04-09 05:17:23.675426 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:17:23.675437 | orchestrator |
2026-04-09 05:17:23.675448 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-04-09 05:17:23.675459 | orchestrator | Thursday 09 April 2026 05:17:21 +0000 (0:00:01.130) 0:06:23.246 ********
2026-04-09 05:17:23.675470 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-09 05:17:23.675481 | orchestrator |
2026-04-09 05:17:23.675492 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-04-09 05:17:23.675522 | orchestrator | Thursday 09 April 2026 05:17:23 +0000 (0:00:02.284) 0:06:25.530 ********
2026-04-09 05:18:25.855204 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.855333 | orchestrator |
2026-04-09 05:18:25.855356 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-04-09 05:18:25.855374 | orchestrator | Thursday 09 April 2026 05:17:24 +0000 (0:00:01.135) 0:06:26.666 ********
2026-04-09 05:18:25.855390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:18:25.855406 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:18:25.855423 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:18:25.855438 | orchestrator |
2026-04-09 05:18:25.855524 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-04-09 05:18:25.855546 | orchestrator | Thursday 09 April 2026 05:17:27 +0000 (0:00:02.474) 0:06:29.141 ********
2026-04-09 05:18:25.855561 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-04-09 05:18:25.855576 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-04-09 05:18:25.855593 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-04-09 05:18:25.855609 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-04-09 05:18:25.855625 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-04-09 05:18:25.855641 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-04-09 05:18:25.855657 | orchestrator |
2026-04-09 05:18:25.855673 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-04-09 05:18:25.855688 | orchestrator | Thursday 09 April 2026 05:17:41 +0000 (0:00:13.838) 0:06:42.980 ********
2026-04-09 05:18:25.855704 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:18:25.855721 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:18:25.855737 | orchestrator |
2026-04-09 05:18:25.855752 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-04-09 05:18:25.855767 | orchestrator | Thursday 09 April 2026 05:17:45 +0000 (0:00:03.928) 0:06:46.908 ********
2026-04-09 05:18:25.855783 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:18:25.855798 | orchestrator |
2026-04-09 05:18:25.855813 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 05:18:25.855860 | orchestrator | Thursday 09 April 2026 05:17:47 +0000 (0:00:02.464) 0:06:49.373 ********
2026-04-09 05:18:25.855877 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-04-09 05:18:25.855894 | orchestrator |
2026-04-09 05:18:25.855909 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 05:18:25.855924 | orchestrator | Thursday 09 April 2026 05:17:48 +0000 (0:00:01.454) 0:06:50.827 ********
2026-04-09 05:18:25.855939 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-04-09 05:18:25.855955 | orchestrator |
2026-04-09 05:18:25.855986 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 05:18:25.855996 | orchestrator | Thursday 09 April 2026 05:17:50 +0000 (0:00:01.602) 0:06:52.429 ********
2026-04-09 05:18:25.856005 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:18:25.856015 | orchestrator |
2026-04-09 05:18:25.856024 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 05:18:25.856033 | orchestrator | Thursday 09 April 2026 05:17:52 +0000 (0:00:01.561) 0:06:53.991 ********
2026-04-09 05:18:25.856041 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856050 | orchestrator |
2026-04-09 05:18:25.856059 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 05:18:25.856068 | orchestrator | Thursday 09 April 2026 05:17:53 +0000 (0:00:01.111) 0:06:55.103 ********
2026-04-09 05:18:25.856077 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856085 | orchestrator |
2026-04-09 05:18:25.856094 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 05:18:25.856103 | orchestrator | Thursday 09 April 2026 05:17:54 +0000 (0:00:01.216) 0:06:56.320 ********
2026-04-09 05:18:25.856112 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856120 | orchestrator |
2026-04-09 05:18:25.856129 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 05:18:25.856138 | orchestrator | Thursday 09 April 2026 05:17:55 +0000 (0:00:01.171) 0:06:57.492 ********
2026-04-09 05:18:25.856147 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:18:25.856156 | orchestrator |
2026-04-09 05:18:25.856164 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 05:18:25.856173 | orchestrator | Thursday 09 April 2026 05:17:57 +0000 (0:00:01.592) 0:06:59.085 ********
2026-04-09 05:18:25.856182 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856190 | orchestrator |
2026-04-09 05:18:25.856199 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 05:18:25.856208 | orchestrator | Thursday 09 April 2026 05:17:58 +0000 (0:00:01.188) 0:07:00.274 ********
2026-04-09 05:18:25.856217 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856225 | orchestrator |
2026-04-09 05:18:25.856234 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 05:18:25.856243 | orchestrator | Thursday 09 April 2026 05:17:59 +0000 (0:00:01.188) 0:07:01.462 ********
2026-04-09 05:18:25.856251 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:18:25.856260 | orchestrator |
2026-04-09 05:18:25.856269 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 05:18:25.856278 | orchestrator | Thursday 09 April 2026 05:18:01 +0000 (0:00:01.640) 0:07:03.103 ********
2026-04-09 05:18:25.856287 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:18:25.856296 | orchestrator |
2026-04-09 05:18:25.856328 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 05:18:25.856344 | orchestrator | Thursday 09 April 2026 05:18:02 +0000 (0:00:01.555) 0:07:04.659 ********
2026-04-09 05:18:25.856360 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856376 | orchestrator |
2026-04-09 05:18:25.856391 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 05:18:25.856406 | orchestrator | Thursday 09 April 2026 05:18:03 +0000 (0:00:01.163) 0:07:05.823 ********
2026-04-09 05:18:25.856433 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:18:25.856448 | orchestrator |
2026-04-09 05:18:25.856489 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 05:18:25.856505 | orchestrator | Thursday 09 April 2026 05:18:05 +0000 (0:00:01.160) 0:07:06.984 ********
2026-04-09 05:18:25.856521 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856535 | orchestrator |
2026-04-09 05:18:25.856548 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 05:18:25.856561 | orchestrator | Thursday 09 April 2026 05:18:06 +0000 (0:00:01.111) 0:07:08.095 ********
2026-04-09 05:18:25.856575 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856588 | orchestrator |
2026-04-09 05:18:25.856602 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 05:18:25.856615 | orchestrator | Thursday 09 April 2026 05:18:07 +0000 (0:00:01.170) 0:07:09.266 ********
2026-04-09 05:18:25.856630 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856645 | orchestrator |
2026-04-09 05:18:25.856660 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 05:18:25.856675 | orchestrator | Thursday 09 April 2026 05:18:08 +0000 (0:00:01.128) 0:07:10.394 ********
2026-04-09 05:18:25.856689 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856704 | orchestrator |
2026-04-09 05:18:25.856717 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 05:18:25.856727 | orchestrator | Thursday 09 April 2026 05:18:09 +0000 (0:00:01.131) 0:07:11.525 ********
2026-04-09 05:18:25.856735 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856744 | orchestrator |
2026-04-09 05:18:25.856753 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 05:18:25.856761 | orchestrator | Thursday 09 April 2026 05:18:10 +0000 (0:00:01.172) 0:07:12.698 ********
2026-04-09 05:18:25.856770 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:18:25.856779 | orchestrator |
2026-04-09 05:18:25.856788 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 05:18:25.856796 | orchestrator | Thursday 09 April 2026 05:18:11 +0000 (0:00:01.138) 0:07:13.837 ********
2026-04-09 05:18:25.856805 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:18:25.856814 | orchestrator |
2026-04-09 05:18:25.856822 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 05:18:25.856831 | orchestrator | Thursday 09 April 2026 05:18:13 +0000 (0:00:01.168) 0:07:15.005 ********
2026-04-09 05:18:25.856840 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:18:25.856848 | orchestrator |
2026-04-09 05:18:25.856857 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-09 05:18:25.856866 | orchestrator | Thursday 09 April 2026 05:18:14 +0000 (0:00:01.129) 0:07:16.135 ********
2026-04-09 05:18:25.856874 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856883 | orchestrator |
2026-04-09 05:18:25.856892 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-09 05:18:25.856908 | orchestrator | Thursday 09 April 2026 05:18:15 +0000 (0:00:01.106) 0:07:17.241 ********
2026-04-09 05:18:25.856917 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856926 | orchestrator |
2026-04-09 05:18:25.856934 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-09 05:18:25.856943 | orchestrator | Thursday 09 April 2026 05:18:16 +0000 (0:00:01.117) 0:07:18.358 ********
2026-04-09 05:18:25.856952 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856960 | orchestrator |
2026-04-09 05:18:25.856969 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-09 05:18:25.856977 | orchestrator | Thursday 09 April 2026 05:18:17 +0000 (0:00:01.163) 0:07:19.522 ********
2026-04-09 05:18:25.856986 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.856995 | orchestrator |
2026-04-09 05:18:25.857003 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-09 05:18:25.857012 | orchestrator | Thursday 09 April 2026 05:18:18 +0000 (0:00:01.213) 0:07:20.735 ********
2026-04-09 05:18:25.857032 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.857041 | orchestrator |
2026-04-09 05:18:25.857049 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-09 05:18:25.857058 | orchestrator | Thursday 09 April 2026 05:18:20 +0000 (0:00:01.179) 0:07:21.915 ********
2026-04-09 05:18:25.857067 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.857075 | orchestrator |
2026-04-09 05:18:25.857084 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-09 05:18:25.857093 | orchestrator | Thursday 09 April 2026 05:18:21 +0000 (0:00:01.178) 0:07:23.093 ********
2026-04-09 05:18:25.857101 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.857110 | orchestrator |
2026-04-09 05:18:25.857119 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-09 05:18:25.857127 | orchestrator | Thursday 09 April 2026 05:18:22 +0000 (0:00:01.151) 0:07:24.245 ********
2026-04-09 05:18:25.857136 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.857148 | orchestrator |
2026-04-09 05:18:25.857157 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-09 05:18:25.857166 | orchestrator | Thursday 09 April 2026 05:18:23 +0000 (0:00:01.156) 0:07:25.402 ********
2026-04-09 05:18:25.857175 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.857183 | orchestrator |
2026-04-09 05:18:25.857192 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-09 05:18:25.857201 | orchestrator | Thursday 09 April 2026 05:18:24 +0000 (0:00:01.159) 0:07:26.561 ********
2026-04-09 05:18:25.857209 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:18:25.857218 | orchestrator |
2026-04-09 05:18:25.857227 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-09 05:18:25.857236 | orchestrator | Thursday 09 April 2026 05:18:25 +0000 (0:00:01.152) 0:07:27.714 ********
2026-04-09 05:19:16.507524 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.507645 | orchestrator |
2026-04-09 05:19:16.507663 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-09 05:19:16.507678 | orchestrator | Thursday 09 April 2026 05:18:26 +0000 (0:00:01.120) 0:07:28.834 ********
2026-04-09 05:19:16.507690 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.507702 | orchestrator |
2026-04-09 05:19:16.507714 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-09 05:19:16.507726 | orchestrator | Thursday 09 April 2026 05:18:28 +0000 (0:00:01.159) 0:07:29.994 ********
2026-04-09 05:19:16.507738 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:19:16.507750 | orchestrator |
2026-04-09 05:19:16.507762 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-09 05:19:16.507775 | orchestrator | Thursday 09 April 2026 05:18:30 +0000 (0:00:01.954) 0:07:31.948 ********
2026-04-09 05:19:16.507786 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:19:16.507798 | orchestrator |
2026-04-09 05:19:16.507810 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-09 05:19:16.507822 | orchestrator | Thursday 09 April 2026 05:18:32 +0000 (0:00:02.434) 0:07:34.382 ********
2026-04-09 05:19:16.507834 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-04-09 05:19:16.507847 | orchestrator |
2026-04-09 05:19:16.507858 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-09 05:19:16.507871 | orchestrator | Thursday 09 April 2026 05:18:33 +0000 (0:00:01.479) 0:07:35.862 ********
2026-04-09 05:19:16.507883 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.507894 | orchestrator |
2026-04-09 05:19:16.507906 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-09 05:19:16.507918 | orchestrator | Thursday 09 April 2026 05:18:35 +0000 (0:00:01.163) 0:07:37.025 ********
2026-04-09 05:19:16.507929 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.507941 | orchestrator |
2026-04-09 05:19:16.507953 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-09 05:19:16.507965 | orchestrator | Thursday 09 April 2026 05:18:36 +0000 (0:00:01.146) 0:07:38.172 ********
2026-04-09 05:19:16.508002 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 05:19:16.508015 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 05:19:16.508028 | orchestrator |
2026-04-09 05:19:16.508039 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-09 05:19:16.508050 | orchestrator | Thursday 09 April 2026 05:18:38 +0000 (0:00:01.882) 0:07:40.054 ********
2026-04-09 05:19:16.508064 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:19:16.508077 | orchestrator |
2026-04-09 05:19:16.508090 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-09 05:19:16.508104 | orchestrator | Thursday 09 April 2026 05:18:39 +0000 (0:00:01.684) 0:07:41.739 ********
2026-04-09 05:19:16.508117 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508129 | orchestrator |
2026-04-09 05:19:16.508142 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-09 05:19:16.508156 | orchestrator | Thursday 09 April 2026 05:18:41 +0000 (0:00:01.253) 0:07:42.992 ********
2026-04-09 05:19:16.508169 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508182 | orchestrator |
2026-04-09 05:19:16.508222 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-09 05:19:16.508243 | orchestrator | Thursday 09 April 2026 05:18:42 +0000 (0:00:01.148) 0:07:44.140 ********
2026-04-09 05:19:16.508261 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508280 | orchestrator |
2026-04-09 05:19:16.508297 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-09 05:19:16.508316 | orchestrator | Thursday 09 April 2026 05:18:43 +0000 (0:00:01.169) 0:07:45.309 ********
2026-04-09 05:19:16.508363 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-04-09 05:19:16.508383 | orchestrator |
2026-04-09 05:19:16.508402 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-09 05:19:16.508422 | orchestrator | Thursday 09 April 2026 05:18:44 +0000 (0:00:01.523) 0:07:46.834 ********
2026-04-09 05:19:16.508440 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:19:16.508457 | orchestrator |
2026-04-09 05:19:16.508469 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-09 05:19:16.508480 | orchestrator | Thursday 09 April 2026 05:18:46 +0000 (0:00:01.918) 0:07:48.752 ********
2026-04-09 05:19:16.508491 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 05:19:16.508502 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 05:19:16.508513 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 05:19:16.508524 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508534 | orchestrator |
2026-04-09 05:19:16.508545 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-09 05:19:16.508556 | orchestrator | Thursday 09 April 2026 05:18:48 +0000 (0:00:01.152) 0:07:49.905 ********
2026-04-09 05:19:16.508567 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508578 | orchestrator |
2026-04-09 05:19:16.508588 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-09 05:19:16.508599 | orchestrator | Thursday 09 April 2026 05:18:49 +0000 (0:00:01.125) 0:07:51.031 ********
2026-04-09 05:19:16.508610 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508621 | orchestrator |
2026-04-09 05:19:16.508632 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-09 05:19:16.508643 | orchestrator | Thursday 09 April 2026 05:18:50 +0000 (0:00:01.196) 0:07:52.227 ********
2026-04-09 05:19:16.508653 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508664 | orchestrator |
2026-04-09 05:19:16.508675 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-09 05:19:16.508710 | orchestrator | Thursday 09 April 2026 05:18:51 +0000 (0:00:01.169) 0:07:53.397 ********
2026-04-09 05:19:16.508745 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508763 | orchestrator |
2026-04-09 05:19:16.508780 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-09 05:19:16.508791 | orchestrator | Thursday 09 April 2026 05:18:52 +0000 (0:00:01.130) 0:07:54.527 ********
2026-04-09 05:19:16.508802 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508813 | orchestrator |
2026-04-09 05:19:16.508824 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 05:19:16.508835 | orchestrator | Thursday 09 April 2026 05:18:53 +0000 (0:00:01.122) 0:07:55.650 ********
2026-04-09 05:19:16.508846 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:19:16.508857 | orchestrator |
2026-04-09 05:19:16.508867 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 05:19:16.508879 | orchestrator | Thursday 09 April 2026 05:18:56 +0000 (0:00:02.615) 0:07:58.266 ********
2026-04-09 05:19:16.508889 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:19:16.508900 | orchestrator |
2026-04-09 05:19:16.508911 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 05:19:16.508922 | orchestrator | Thursday 09 April 2026 05:18:57 +0000 (0:00:01.225) 0:07:59.492 ********
2026-04-09 05:19:16.508933 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-04-09 05:19:16.508944 | orchestrator |
2026-04-09 05:19:16.508955 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 05:19:16.508965 | orchestrator | Thursday 09 April 2026 05:18:59 +0000 (0:00:01.484) 0:08:00.976 ********
2026-04-09 05:19:16.508976 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.508987 | orchestrator |
2026-04-09 05:19:16.508998 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 05:19:16.509009 | orchestrator | Thursday 09 April 2026 05:19:00 +0000 (0:00:01.217) 0:08:02.194 ********
2026-04-09 05:19:16.509020 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.509031 | orchestrator |
2026-04-09 05:19:16.509042 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 05:19:16.509052 | orchestrator | Thursday 09 April 2026 05:19:01 +0000 (0:00:01.133) 0:08:03.327 ********
2026-04-09 05:19:16.509063 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.509074 | orchestrator |
2026-04-09 05:19:16.509085 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 05:19:16.509096 | orchestrator | Thursday 09 April 2026 05:19:02 +0000 (0:00:01.154) 0:08:04.482 ********
2026-04-09 05:19:16.509106 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.509117 | orchestrator |
2026-04-09 05:19:16.509128 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 05:19:16.509139 | orchestrator | Thursday 09 April 2026 05:19:03 +0000 (0:00:01.151) 0:08:05.634 ********
2026-04-09 05:19:16.509150 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.509160 | orchestrator |
2026-04-09 05:19:16.509176 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 05:19:16.509194 | orchestrator | Thursday 09 April 2026 05:19:04 +0000 (0:00:01.140) 0:08:06.775 ********
2026-04-09 05:19:16.509213 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.509231 | orchestrator |
2026-04-09 05:19:16.509248 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 05:19:16.509276 | orchestrator | Thursday 09 April 2026 05:19:06 +0000 (0:00:01.127) 0:08:07.903 ********
2026-04-09 05:19:16.509297 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.509315 | orchestrator |
2026-04-09 05:19:16.509379 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 05:19:16.509391 | orchestrator | Thursday 09 April 2026 05:19:07 +0000 (0:00:01.193) 0:08:09.096 ********
2026-04-09 05:19:16.509402 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:19:16.509413 | orchestrator |
2026-04-09 05:19:16.509424 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 05:19:16.509445 | orchestrator | Thursday 09 April 2026 05:19:08 +0000 (0:00:01.181) 0:08:10.277 ********
2026-04-09 05:19:16.509456 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:19:16.509467 | orchestrator |
2026-04-09 05:19:16.509478 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 05:19:16.509489 | orchestrator | Thursday 09 April 2026 05:19:09 +0000 (0:00:01.155) 0:08:11.433 ********
2026-04-09 05:19:16.509500 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-09 05:19:16.509511 | orchestrator |
2026-04-09 05:19:16.509522 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 05:19:16.509533 | orchestrator | Thursday 09 April 2026 05:19:11 +0000 (0:00:01.481) 0:08:12.914 ********
2026-04-09 05:19:16.509544 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-09 05:19:16.509556 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-09 05:19:16.509567 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-09 05:19:16.509578 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-09 05:19:16.509589 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-09 05:19:16.509600 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-09 05:19:16.509611 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-09 05:19:16.509622 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-09 05:19:16.509633 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 05:19:16.509644 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 05:19:16.509655 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 05:19:16.509666 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 05:19:16.509677 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 05:19:16.509688 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 05:19:16.509709 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-09 05:20:04.788643 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-09 05:20:04.788762 | orchestrator |
2026-04-09 05:20:04.788780 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 05:20:04.788794 | orchestrator | Thursday 09 April 2026 05:19:17 +0000 (0:00:06.923) 0:08:19.838 ********
2026-04-09 05:20:04.788806 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.788819 | orchestrator |
2026-04-09 05:20:04.788830 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 05:20:04.788842 | orchestrator | Thursday 09 April 2026 05:19:19 +0000 (0:00:01.127) 0:08:20.966 ********
2026-04-09 05:20:04.788853 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.788864 | orchestrator |
2026-04-09 05:20:04.788875 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 05:20:04.788887 | orchestrator | Thursday 09 April 2026 05:19:20 +0000 (0:00:01.137) 0:08:22.103 ********
2026-04-09 05:20:04.788898 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.788909 | orchestrator |
2026-04-09 05:20:04.788920 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 05:20:04.788931 | orchestrator | Thursday 09 April 2026 05:19:21 +0000 (0:00:01.132) 0:08:23.236 ********
2026-04-09 05:20:04.788942 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.788953 | orchestrator |
2026-04-09 05:20:04.788964 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 05:20:04.788975 | orchestrator | Thursday 09 April 2026 05:19:22 +0000 (0:00:01.138) 0:08:24.374 ********
2026-04-09 05:20:04.788986 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.788997 | orchestrator |
2026-04-09 05:20:04.789013 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 05:20:04.789033 | orchestrator | Thursday 09 April 2026 05:19:23 +0000 (0:00:01.206) 0:08:25.581 ********
2026-04-09 05:20:04.789084 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.789104 | orchestrator |
2026-04-09 05:20:04.789124 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 05:20:04.789146 | orchestrator | Thursday 09 April 2026 05:19:24 +0000 (0:00:01.150) 0:08:26.731 ********
2026-04-09 05:20:04.789165 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.789177 | orchestrator |
2026-04-09 05:20:04.789188 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 05:20:04.789250 | orchestrator | Thursday 09 April 2026 05:19:25 +0000 (0:00:01.115) 0:08:27.847 ********
2026-04-09 05:20:04.789263 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.789274 | orchestrator |
2026-04-09 05:20:04.789286 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 05:20:04.789297 | orchestrator | Thursday 09 April 2026 05:19:27 +0000 (0:00:01.153) 0:08:29.000 ********
2026-04-09 05:20:04.789308 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.789319 | orchestrator |
2026-04-09 05:20:04.789338 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 05:20:04.789357 | orchestrator | Thursday 09 April 2026 05:19:28 +0000 (0:00:01.118) 0:08:30.119 ********
2026-04-09 05:20:04.789376 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.789394 | orchestrator |
2026-04-09 05:20:04.789433 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 05:20:04.789456 | orchestrator | Thursday 09 April 2026 05:19:29 +0000 (0:00:01.108) 0:08:31.228 ********
2026-04-09 05:20:04.789474 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:20:04.789486 | orchestrator |
2026-04-09 05:20:04.789497 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 05:20:04.789508 | orchestrator | Thursday 09 April 2026 05:19:30 +0000 (0:00:01.134)
0:08:32.362 ******** 2026-04-09 05:20:04.789519 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789530 | orchestrator | 2026-04-09 05:20:04.789541 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-09 05:20:04.789552 | orchestrator | Thursday 09 April 2026 05:19:31 +0000 (0:00:01.150) 0:08:33.513 ******** 2026-04-09 05:20:04.789563 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789574 | orchestrator | 2026-04-09 05:20:04.789584 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-09 05:20:04.789595 | orchestrator | Thursday 09 April 2026 05:19:32 +0000 (0:00:01.256) 0:08:34.769 ******** 2026-04-09 05:20:04.789606 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789617 | orchestrator | 2026-04-09 05:20:04.789628 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 05:20:04.789639 | orchestrator | Thursday 09 April 2026 05:19:34 +0000 (0:00:01.123) 0:08:35.893 ******** 2026-04-09 05:20:04.789651 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789662 | orchestrator | 2026-04-09 05:20:04.789673 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 05:20:04.789684 | orchestrator | Thursday 09 April 2026 05:19:35 +0000 (0:00:01.289) 0:08:37.183 ******** 2026-04-09 05:20:04.789695 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789706 | orchestrator | 2026-04-09 05:20:04.789717 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 05:20:04.789728 | orchestrator | Thursday 09 April 2026 05:19:36 +0000 (0:00:01.097) 0:08:38.280 ******** 2026-04-09 05:20:04.789739 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789750 | orchestrator | 2026-04-09 05:20:04.789761 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 05:20:04.789773 | orchestrator | Thursday 09 April 2026 05:19:37 +0000 (0:00:01.120) 0:08:39.401 ******** 2026-04-09 05:20:04.789784 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789795 | orchestrator | 2026-04-09 05:20:04.789806 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 05:20:04.789827 | orchestrator | Thursday 09 April 2026 05:19:38 +0000 (0:00:01.187) 0:08:40.589 ******** 2026-04-09 05:20:04.789839 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789850 | orchestrator | 2026-04-09 05:20:04.789881 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 05:20:04.789893 | orchestrator | Thursday 09 April 2026 05:19:39 +0000 (0:00:01.135) 0:08:41.724 ******** 2026-04-09 05:20:04.789904 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789915 | orchestrator | 2026-04-09 05:20:04.789926 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 05:20:04.789937 | orchestrator | Thursday 09 April 2026 05:19:40 +0000 (0:00:01.141) 0:08:42.865 ******** 2026-04-09 05:20:04.789948 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.789959 | orchestrator | 2026-04-09 05:20:04.789970 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 05:20:04.789981 | orchestrator | Thursday 09 April 2026 05:19:42 +0000 (0:00:01.137) 0:08:44.003 ******** 2026-04-09 05:20:04.789992 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-09 05:20:04.790003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-09 05:20:04.790071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-09 05:20:04.790084 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 05:20:04.790095 | orchestrator | 2026-04-09 05:20:04.790106 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 05:20:04.790117 | orchestrator | Thursday 09 April 2026 05:19:43 +0000 (0:00:01.748) 0:08:45.751 ******** 2026-04-09 05:20:04.790128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-09 05:20:04.790139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-09 05:20:04.790150 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-09 05:20:04.790165 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.790185 | orchestrator | 2026-04-09 05:20:04.790247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 05:20:04.790267 | orchestrator | Thursday 09 April 2026 05:19:45 +0000 (0:00:01.444) 0:08:47.196 ******** 2026-04-09 05:20:04.790286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-09 05:20:04.790305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-09 05:20:04.790324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-09 05:20:04.790343 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.790361 | orchestrator | 2026-04-09 05:20:04.790377 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 05:20:04.790394 | orchestrator | Thursday 09 April 2026 05:19:46 +0000 (0:00:01.401) 0:08:48.598 ******** 2026-04-09 05:20:04.790412 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.790430 | orchestrator | 2026-04-09 05:20:04.790446 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 05:20:04.790466 | orchestrator | Thursday 09 April 2026 05:19:47 +0000 (0:00:01.129) 0:08:49.728 ******** 2026-04-09 05:20:04.790484 | orchestrator | 
skipping: [testbed-node-0] => (item=0)  2026-04-09 05:20:04.790504 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.790523 | orchestrator | 2026-04-09 05:20:04.790541 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-09 05:20:04.790561 | orchestrator | Thursday 09 April 2026 05:19:49 +0000 (0:00:01.368) 0:08:51.097 ******** 2026-04-09 05:20:04.790579 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:20:04.790597 | orchestrator | 2026-04-09 05:20:04.790609 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-09 05:20:04.790628 | orchestrator | Thursday 09 April 2026 05:19:51 +0000 (0:00:01.840) 0:08:52.937 ******** 2026-04-09 05:20:04.790640 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:20:04.790651 | orchestrator | 2026-04-09 05:20:04.790662 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-09 05:20:04.790683 | orchestrator | Thursday 09 April 2026 05:19:52 +0000 (0:00:01.160) 0:08:54.098 ******** 2026-04-09 05:20:04.790695 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-04-09 05:20:04.790707 | orchestrator | 2026-04-09 05:20:04.790718 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-09 05:20:04.790729 | orchestrator | Thursday 09 April 2026 05:19:53 +0000 (0:00:01.526) 0:08:55.624 ******** 2026-04-09 05:20:04.790739 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-09 05:20:04.790750 | orchestrator | 2026-04-09 05:20:04.790761 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-09 05:20:04.790772 | orchestrator | Thursday 09 April 2026 05:19:57 +0000 (0:00:03.274) 0:08:58.898 ******** 2026-04-09 05:20:04.790783 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:20:04.790794 | 
orchestrator | 2026-04-09 05:20:04.790805 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-09 05:20:04.790815 | orchestrator | Thursday 09 April 2026 05:19:58 +0000 (0:00:01.125) 0:09:00.024 ******** 2026-04-09 05:20:04.790826 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:20:04.790837 | orchestrator | 2026-04-09 05:20:04.790848 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-09 05:20:04.790859 | orchestrator | Thursday 09 April 2026 05:19:59 +0000 (0:00:01.124) 0:09:01.148 ******** 2026-04-09 05:20:04.790870 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:20:04.790881 | orchestrator | 2026-04-09 05:20:04.790892 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-09 05:20:04.790903 | orchestrator | Thursday 09 April 2026 05:20:00 +0000 (0:00:01.322) 0:09:02.471 ******** 2026-04-09 05:20:04.790914 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:20:04.790925 | orchestrator | 2026-04-09 05:20:04.790936 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-09 05:20:04.790947 | orchestrator | Thursday 09 April 2026 05:20:02 +0000 (0:00:02.048) 0:09:04.519 ******** 2026-04-09 05:20:04.790958 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:20:04.790968 | orchestrator | 2026-04-09 05:20:04.790979 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-09 05:20:04.790990 | orchestrator | Thursday 09 April 2026 05:20:04 +0000 (0:00:01.605) 0:09:06.125 ******** 2026-04-09 05:20:04.791001 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:20:04.791012 | orchestrator | 2026-04-09 05:20:04.791035 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-09 05:21:03.625895 | orchestrator | Thursday 09 April 2026 05:20:05 +0000 (0:00:01.519) 
0:09:07.645 ******** 2026-04-09 05:21:03.626012 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.626143 | orchestrator | 2026-04-09 05:21:03.626174 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-09 05:21:03.626194 | orchestrator | Thursday 09 April 2026 05:20:07 +0000 (0:00:01.492) 0:09:09.137 ******** 2026-04-09 05:21:03.626214 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.626234 | orchestrator | 2026-04-09 05:21:03.626255 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-09 05:21:03.626275 | orchestrator | Thursday 09 April 2026 05:20:09 +0000 (0:00:01.744) 0:09:10.882 ******** 2026-04-09 05:21:03.626292 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.626303 | orchestrator | 2026-04-09 05:21:03.626315 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-09 05:21:03.626326 | orchestrator | Thursday 09 April 2026 05:20:10 +0000 (0:00:01.660) 0:09:12.543 ******** 2026-04-09 05:21:03.626337 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 05:21:03.626349 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 05:21:03.626361 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 05:21:03.626372 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-04-09 05:21:03.626383 | orchestrator | 2026-04-09 05:21:03.626421 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-09 05:21:03.626434 | orchestrator | Thursday 09 April 2026 05:20:14 +0000 (0:00:03.840) 0:09:16.383 ******** 2026-04-09 05:21:03.626447 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:21:03.626461 | orchestrator | 2026-04-09 05:21:03.626473 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-09 
05:21:03.626486 | orchestrator | Thursday 09 April 2026 05:20:16 +0000 (0:00:02.092) 0:09:18.476 ******** 2026-04-09 05:21:03.626499 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.626513 | orchestrator | 2026-04-09 05:21:03.626526 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-09 05:21:03.626539 | orchestrator | Thursday 09 April 2026 05:20:17 +0000 (0:00:01.149) 0:09:19.626 ******** 2026-04-09 05:21:03.626552 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.626565 | orchestrator | 2026-04-09 05:21:03.626578 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-09 05:21:03.626590 | orchestrator | Thursday 09 April 2026 05:20:18 +0000 (0:00:01.119) 0:09:20.745 ******** 2026-04-09 05:21:03.626603 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.626616 | orchestrator | 2026-04-09 05:21:03.626629 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-09 05:21:03.626642 | orchestrator | Thursday 09 April 2026 05:20:20 +0000 (0:00:02.101) 0:09:22.847 ******** 2026-04-09 05:21:03.626655 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.626669 | orchestrator | 2026-04-09 05:21:03.626681 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-09 05:21:03.626694 | orchestrator | Thursday 09 April 2026 05:20:22 +0000 (0:00:01.493) 0:09:24.340 ******** 2026-04-09 05:21:03.626707 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:21:03.626721 | orchestrator | 2026-04-09 05:21:03.626734 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-09 05:21:03.626761 | orchestrator | Thursday 09 April 2026 05:20:23 +0000 (0:00:01.135) 0:09:25.476 ******** 2026-04-09 05:21:03.626775 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-04-09 
05:21:03.626790 | orchestrator | 2026-04-09 05:21:03.626801 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-09 05:21:03.626812 | orchestrator | Thursday 09 April 2026 05:20:25 +0000 (0:00:01.575) 0:09:27.051 ******** 2026-04-09 05:21:03.626822 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:21:03.626833 | orchestrator | 2026-04-09 05:21:03.626844 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-09 05:21:03.626855 | orchestrator | Thursday 09 April 2026 05:20:26 +0000 (0:00:01.151) 0:09:28.203 ******** 2026-04-09 05:21:03.626866 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:21:03.626876 | orchestrator | 2026-04-09 05:21:03.626887 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-09 05:21:03.626898 | orchestrator | Thursday 09 April 2026 05:20:27 +0000 (0:00:01.166) 0:09:29.370 ******** 2026-04-09 05:21:03.626909 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-04-09 05:21:03.626920 | orchestrator | 2026-04-09 05:21:03.626930 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-09 05:21:03.626941 | orchestrator | Thursday 09 April 2026 05:20:28 +0000 (0:00:01.461) 0:09:30.831 ******** 2026-04-09 05:21:03.626952 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.626963 | orchestrator | 2026-04-09 05:21:03.626973 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-09 05:21:03.626984 | orchestrator | Thursday 09 April 2026 05:20:31 +0000 (0:00:02.278) 0:09:33.110 ******** 2026-04-09 05:21:03.626995 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.627006 | orchestrator | 2026-04-09 05:21:03.627017 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-09 
05:21:03.627027 | orchestrator | Thursday 09 April 2026 05:20:33 +0000 (0:00:01.949) 0:09:35.060 ******** 2026-04-09 05:21:03.627038 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.627057 | orchestrator | 2026-04-09 05:21:03.627068 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-09 05:21:03.627104 | orchestrator | Thursday 09 April 2026 05:20:35 +0000 (0:00:02.418) 0:09:37.478 ******** 2026-04-09 05:21:03.627116 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:21:03.627127 | orchestrator | 2026-04-09 05:21:03.627138 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-09 05:21:03.627148 | orchestrator | Thursday 09 April 2026 05:20:38 +0000 (0:00:03.333) 0:09:40.812 ******** 2026-04-09 05:21:03.627159 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-04-09 05:21:03.627170 | orchestrator | 2026-04-09 05:21:03.627199 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-04-09 05:21:03.627215 | orchestrator | Thursday 09 April 2026 05:20:40 +0000 (0:00:01.623) 0:09:42.436 ******** 2026-04-09 05:21:03.627233 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.627253 | orchestrator | 2026-04-09 05:21:03.627273 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-09 05:21:03.627292 | orchestrator | Thursday 09 April 2026 05:20:42 +0000 (0:00:02.224) 0:09:44.660 ******** 2026-04-09 05:21:03.627309 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:21:03.627325 | orchestrator | 2026-04-09 05:21:03.627336 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-09 05:21:03.627347 | orchestrator | Thursday 09 April 2026 05:20:45 +0000 (0:00:03.126) 0:09:47.787 ******** 2026-04-09 05:21:03.627357 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:21:03.627368 | orchestrator | 2026-04-09 05:21:03.627379 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-09 05:21:03.627389 | orchestrator | Thursday 09 April 2026 05:20:47 +0000 (0:00:01.162) 0:09:48.950 ******** 2026-04-09 05:21:03.627402 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-09 05:21:03.627416 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-04-09 05:21:03.627427 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-09 05:21:03.627438 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-09 05:21:03.627456 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-09 05:21:03.627469 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}])  2026-04-09 05:21:03.627490 | orchestrator | 2026-04-09 05:21:03.627501 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-09 05:21:03.627511 | orchestrator | Thursday 09 April 2026 05:20:57 +0000 (0:00:10.325) 0:09:59.275 ******** 
2026-04-09 05:21:03.627522 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:21:03.627533 | orchestrator |
2026-04-09 05:21:03.627544 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 05:21:03.627554 | orchestrator | Thursday 09 April 2026 05:20:59 +0000 (0:00:02.563) 0:10:01.839 ********
2026-04-09 05:21:03.627565 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:21:03.627576 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 05:21:03.627587 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 05:21:03.627598 | orchestrator |
2026-04-09 05:21:03.627609 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 05:21:03.627619 | orchestrator | Thursday 09 April 2026 05:21:02 +0000 (0:00:02.240) 0:10:04.080 ********
2026-04-09 05:21:03.627630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:21:03.627641 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 05:21:03.627652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 05:21:03.627663 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:21:03.627674 | orchestrator |
2026-04-09 05:21:03.627685 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-09 05:21:03.627704 | orchestrator | Thursday 09 April 2026 05:21:03 +0000 (0:00:01.403) 0:10:05.483 ********
2026-04-09 05:21:31.823606 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:21:31.823719 | orchestrator |
2026-04-09 05:21:31.823734 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-09 05:21:31.823747 | orchestrator | Thursday 09 April 2026 05:21:04 +0000 (0:00:01.143) 0:10:06.627 ********
2026-04-09 05:21:31.823759 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:21:31.823772 | orchestrator |
2026-04-09 05:21:31.823783 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-09 05:21:31.823794 | orchestrator |
2026-04-09 05:21:31.823805 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-09 05:21:31.823816 | orchestrator | Thursday 09 April 2026 05:21:06 +0000 (0:00:02.177) 0:10:08.804 ********
2026-04-09 05:21:31.823827 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.823838 | orchestrator |
2026-04-09 05:21:31.823850 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-09 05:21:31.823861 | orchestrator | Thursday 09 April 2026 05:21:08 +0000 (0:00:01.118) 0:10:09.922 ********
2026-04-09 05:21:31.823871 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.823882 | orchestrator |
2026-04-09 05:21:31.823894 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-09 05:21:31.823904 | orchestrator | Thursday 09 April 2026 05:21:08 +0000 (0:00:00.837) 0:10:10.760 ********
2026-04-09 05:21:31.823916 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:21:31.823927 | orchestrator |
2026-04-09 05:21:31.823938 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-09 05:21:31.823949 | orchestrator | Thursday 09 April 2026 05:21:09 +0000 (0:00:00.758) 0:10:11.519 ********
2026-04-09 05:21:31.823960 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.823970 | orchestrator |
2026-04-09 05:21:31.823982 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 05:21:31.823993 | orchestrator | Thursday 09 April 2026 05:21:10 +0000 (0:00:00.854) 0:10:12.374 ********
2026-04-09 05:21:31.824004 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-04-09 05:21:31.824102 | orchestrator |
2026-04-09 05:21:31.824116 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 05:21:31.824127 | orchestrator | Thursday 09 April 2026 05:21:11 +0000 (0:00:01.105) 0:10:13.480 ********
2026-04-09 05:21:31.824138 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.824151 | orchestrator |
2026-04-09 05:21:31.824163 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 05:21:31.824176 | orchestrator | Thursday 09 April 2026 05:21:13 +0000 (0:00:01.452) 0:10:14.933 ********
2026-04-09 05:21:31.824189 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.824202 | orchestrator |
2026-04-09 05:21:31.824214 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 05:21:31.824228 | orchestrator | Thursday 09 April 2026 05:21:14 +0000 (0:00:01.114) 0:10:16.047 ********
2026-04-09 05:21:31.824241 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.824253 | orchestrator |
2026-04-09 05:21:31.824266 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 05:21:31.824280 | orchestrator | Thursday 09 April 2026 05:21:15 +0000 (0:00:01.517) 0:10:17.564 ********
2026-04-09 05:21:31.824291 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.824302 | orchestrator |
2026-04-09 05:21:31.824313 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 05:21:31.824339 | orchestrator | Thursday 09 April 2026 05:21:16 +0000 (0:00:01.136) 0:10:18.700 ********
2026-04-09 05:21:31.824350 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.824361 | orchestrator |
2026-04-09 05:21:31.824372 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 05:21:31.824383 | orchestrator | Thursday 09 April 2026 05:21:17 +0000 (0:00:01.126) 0:10:19.827 ********
2026-04-09 05:21:31.824394 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.824405 | orchestrator |
2026-04-09 05:21:31.824416 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 05:21:31.824427 | orchestrator | Thursday 09 April 2026 05:21:19 +0000 (0:00:01.178) 0:10:21.005 ********
2026-04-09 05:21:31.824437 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:21:31.824448 | orchestrator |
2026-04-09 05:21:31.824459 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 05:21:31.824470 | orchestrator | Thursday 09 April 2026 05:21:20 +0000 (0:00:01.131) 0:10:22.137 ********
2026-04-09 05:21:31.824481 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.824492 | orchestrator |
2026-04-09 05:21:31.824503 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 05:21:31.824514 | orchestrator | Thursday 09 April 2026 05:21:21 +0000 (0:00:01.165) 0:10:23.302 ********
2026-04-09 05:21:31.824525 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:21:31.824536 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 05:21:31.824547 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:21:31.824558 | orchestrator |
2026-04-09 05:21:31.824569 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 05:21:31.824580 | orchestrator | Thursday 09 April 2026 05:21:23 +0000 (0:00:01.728) 0:10:25.031 ********
2026-04-09 05:21:31.824591 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:21:31.824602 | orchestrator |
2026-04-09 05:21:31.824612 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 05:21:31.824623 | orchestrator | Thursday 09 April 2026 05:21:24 +0000 (0:00:01.305) 0:10:26.337 ********
2026-04-09 05:21:31.824634 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:21:31.824645 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 05:21:31.824656 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:21:31.824667 | orchestrator |
2026-04-09 05:21:31.824678 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 05:21:31.824696 | orchestrator | Thursday 09 April 2026 05:21:27 +0000 (0:00:03.000) 0:10:29.338 ********
2026-04-09 05:21:31.824724 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 05:21:31.824737 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 05:21:31.824749 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 05:21:31.824760 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:21:31.824770 | orchestrator |
2026-04-09 05:21:31.824782 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 05:21:31.824793 | orchestrator | Thursday 09 April 2026 05:21:28 +0000 (0:00:01.407) 0:10:30.745 ********
2026-04-09 05:21:31.824805 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 05:21:31.824819 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 05:21:31.824831 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 05:21:31.824854 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:21:31.824866 | orchestrator |
2026-04-09 05:21:31.824877 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 05:21:31.824888 | orchestrator | Thursday 09 April 2026 05:21:30 +0000 (0:00:01.664) 0:10:32.410 ********
2026-04-09 05:21:31.824902 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:21:31.824916 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:21:31.824933 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment |
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:21:31.824945 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:31.824956 | orchestrator | 2026-04-09 05:21:31.824968 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 05:21:31.824979 | orchestrator | Thursday 09 April 2026 05:21:31 +0000 (0:00:01.175) 0:10:33.586 ******** 2026-04-09 05:21:31.824992 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:21:25.002130', 'end': '2026-04-09 05:21:25.065492', 'delta': '0:00:00.063362', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 05:21:31.825021 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '344b9fc03006', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:21:25.628133', 'end': '2026-04-09 05:21:25.677793', 'delta': '0:00:00.049660', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['344b9fc03006'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 05:21:50.171920 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '66330ed4242e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:21:26.192552', 'end': '2026-04-09 05:21:26.241967', 'delta': '0:00:00.049415', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['66330ed4242e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 05:21:50.172090 | orchestrator | 2026-04-09 05:21:50.172106 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 05:21:50.172117 | orchestrator | Thursday 09 April 2026 05:21:32 +0000 (0:00:01.210) 0:10:34.796 ******** 2026-04-09 05:21:50.172127 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:21:50.172137 | orchestrator | 2026-04-09 05:21:50.172146 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 05:21:50.172155 | orchestrator | Thursday 09 April 2026 05:21:34 +0000 (0:00:01.239) 0:10:36.036 ******** 2026-04-09 05:21:50.172165 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172174 | orchestrator | 2026-04-09 05:21:50.172183 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 05:21:50.172192 | orchestrator | Thursday 09 April 2026 05:21:35 +0000 (0:00:01.223) 0:10:37.259 ******** 2026-04-09 05:21:50.172201 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:21:50.172210 | orchestrator | 2026-04-09 05:21:50.172219 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-04-09 05:21:50.172228 | orchestrator | Thursday 09 April 2026 05:21:36 +0000 (0:00:01.096) 0:10:38.356 ******** 2026-04-09 05:21:50.172237 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-04-09 05:21:50.172246 | orchestrator | 2026-04-09 05:21:50.172255 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:21:50.172264 | orchestrator | Thursday 09 April 2026 05:21:38 +0000 (0:00:01.959) 0:10:40.315 ******** 2026-04-09 05:21:50.172274 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:21:50.172283 | orchestrator | 2026-04-09 05:21:50.172292 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 05:21:50.172301 | orchestrator | Thursday 09 April 2026 05:21:39 +0000 (0:00:01.175) 0:10:41.490 ******** 2026-04-09 05:21:50.172310 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172319 | orchestrator | 2026-04-09 05:21:50.172345 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 05:21:50.172355 | orchestrator | Thursday 09 April 2026 05:21:40 +0000 (0:00:01.182) 0:10:42.673 ******** 2026-04-09 05:21:50.172364 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172373 | orchestrator | 2026-04-09 05:21:50.172382 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:21:50.172411 | orchestrator | Thursday 09 April 2026 05:21:42 +0000 (0:00:01.234) 0:10:43.908 ******** 2026-04-09 05:21:50.172421 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172430 | orchestrator | 2026-04-09 05:21:50.172441 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 05:21:50.172452 | orchestrator | Thursday 09 April 2026 05:21:43 +0000 (0:00:01.086) 0:10:44.994 ******** 
2026-04-09 05:21:50.172462 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172473 | orchestrator | 2026-04-09 05:21:50.172483 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 05:21:50.172493 | orchestrator | Thursday 09 April 2026 05:21:44 +0000 (0:00:01.157) 0:10:46.152 ******** 2026-04-09 05:21:50.172504 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172514 | orchestrator | 2026-04-09 05:21:50.172524 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 05:21:50.172534 | orchestrator | Thursday 09 April 2026 05:21:45 +0000 (0:00:01.185) 0:10:47.338 ******** 2026-04-09 05:21:50.172544 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172555 | orchestrator | 2026-04-09 05:21:50.172566 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 05:21:50.172575 | orchestrator | Thursday 09 April 2026 05:21:46 +0000 (0:00:01.143) 0:10:48.481 ******** 2026-04-09 05:21:50.172586 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172596 | orchestrator | 2026-04-09 05:21:50.172606 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 05:21:50.172616 | orchestrator | Thursday 09 April 2026 05:21:47 +0000 (0:00:01.152) 0:10:49.633 ******** 2026-04-09 05:21:50.172626 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172636 | orchestrator | 2026-04-09 05:21:50.172647 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 05:21:50.172658 | orchestrator | Thursday 09 April 2026 05:21:48 +0000 (0:00:01.122) 0:10:50.756 ******** 2026-04-09 05:21:50.172668 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:50.172678 | orchestrator | 2026-04-09 05:21:50.172688 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-04-09 05:21:50.172698 | orchestrator | Thursday 09 April 2026 05:21:50 +0000 (0:00:01.134) 0:10:51.891 ******** 2026-04-09 05:21:50.172728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:21:50.172744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:21:50.172755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:21:50.172767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:21:50.172784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:21:50.172799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:21:50.172808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:21:50.172830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '482e14db', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:21:51.430525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:21:51.430693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:21:51.430710 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:21:51.430725 | orchestrator | 2026-04-09 05:21:51.430738 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 05:21:51.430751 | orchestrator | Thursday 09 April 2026 05:21:51 +0000 (0:00:01.263) 0:10:53.154 ******** 2026-04-09 05:21:51.430784 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:21:51.430799 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:21:51.430811 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:21:51.430823 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:21:51.430858 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:21:51.430878 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:21:51.430896 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:21:51.430911 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '482e14db', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:21:51.430934 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:22:26.934827 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:22:26.934953 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.935078 | orchestrator | 2026-04-09 05:22:26.935092 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 05:22:26.935105 | 
orchestrator | Thursday 09 April 2026 05:21:52 +0000 (0:00:01.248) 0:10:54.403 ******** 2026-04-09 05:22:26.935116 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:22:26.935129 | orchestrator | 2026-04-09 05:22:26.935157 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 05:22:26.935169 | orchestrator | Thursday 09 April 2026 05:21:54 +0000 (0:00:01.489) 0:10:55.893 ******** 2026-04-09 05:22:26.935180 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:22:26.935191 | orchestrator | 2026-04-09 05:22:26.935202 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:22:26.935233 | orchestrator | Thursday 09 April 2026 05:21:55 +0000 (0:00:01.104) 0:10:56.997 ******** 2026-04-09 05:22:26.935246 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:22:26.935256 | orchestrator | 2026-04-09 05:22:26.935268 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:22:26.935279 | orchestrator | Thursday 09 April 2026 05:21:56 +0000 (0:00:01.470) 0:10:58.468 ******** 2026-04-09 05:22:26.935290 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.935301 | orchestrator | 2026-04-09 05:22:26.935312 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:22:26.935323 | orchestrator | Thursday 09 April 2026 05:21:57 +0000 (0:00:01.126) 0:10:59.594 ******** 2026-04-09 05:22:26.935335 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.935346 | orchestrator | 2026-04-09 05:22:26.935359 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:22:26.935372 | orchestrator | Thursday 09 April 2026 05:21:59 +0000 (0:00:01.352) 0:11:00.947 ******** 2026-04-09 05:22:26.935386 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.935399 | orchestrator | 2026-04-09 05:22:26.935412 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 05:22:26.935426 | orchestrator | Thursday 09 April 2026 05:22:00 +0000 (0:00:01.143) 0:11:02.091 ******** 2026-04-09 05:22:26.935440 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-09 05:22:26.935453 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 05:22:26.935467 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-09 05:22:26.935480 | orchestrator | 2026-04-09 05:22:26.935492 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 05:22:26.935505 | orchestrator | Thursday 09 April 2026 05:22:01 +0000 (0:00:01.687) 0:11:03.778 ******** 2026-04-09 05:22:26.935518 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-09 05:22:26.935532 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-09 05:22:26.935544 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-09 05:22:26.935557 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.935593 | orchestrator | 2026-04-09 05:22:26.935606 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 05:22:26.935619 | orchestrator | Thursday 09 April 2026 05:22:03 +0000 (0:00:01.266) 0:11:05.044 ******** 2026-04-09 05:22:26.935630 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.935641 | orchestrator | 2026-04-09 05:22:26.935652 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 05:22:26.935663 | orchestrator | Thursday 09 April 2026 05:22:04 +0000 (0:00:01.166) 0:11:06.211 ******** 2026-04-09 05:22:26.935673 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:22:26.935685 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 
05:22:26.935696 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:22:26.935707 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 05:22:26.935718 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 05:22:26.935728 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 05:22:26.935739 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:22:26.935750 | orchestrator | 2026-04-09 05:22:26.935761 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 05:22:26.935772 | orchestrator | Thursday 09 April 2026 05:22:06 +0000 (0:00:02.207) 0:11:08.418 ******** 2026-04-09 05:22:26.935783 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:22:26.935793 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 05:22:26.935804 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:22:26.935815 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 05:22:26.935843 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 05:22:26.935855 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 05:22:26.935866 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:22:26.935877 | orchestrator | 2026-04-09 05:22:26.935888 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-09 05:22:26.935899 | orchestrator | Thursday 09 April 2026 05:22:08 +0000 (0:00:02.222) 0:11:10.640 
******** 2026-04-09 05:22:26.935909 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.935920 | orchestrator | 2026-04-09 05:22:26.935931 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-09 05:22:26.935942 | orchestrator | Thursday 09 April 2026 05:22:09 +0000 (0:00:00.919) 0:11:11.560 ******** 2026-04-09 05:22:26.935953 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.935982 | orchestrator | 2026-04-09 05:22:26.935994 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-09 05:22:26.936005 | orchestrator | Thursday 09 April 2026 05:22:10 +0000 (0:00:00.916) 0:11:12.476 ******** 2026-04-09 05:22:26.936016 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.936027 | orchestrator | 2026-04-09 05:22:26.936038 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-09 05:22:26.936066 | orchestrator | Thursday 09 April 2026 05:22:11 +0000 (0:00:00.773) 0:11:13.250 ******** 2026-04-09 05:22:26.936078 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.936088 | orchestrator | 2026-04-09 05:22:26.936099 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-09 05:22:26.936110 | orchestrator | Thursday 09 April 2026 05:22:12 +0000 (0:00:00.857) 0:11:14.108 ******** 2026-04-09 05:22:26.936121 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.936132 | orchestrator | 2026-04-09 05:22:26.936143 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-09 05:22:26.936162 | orchestrator | Thursday 09 April 2026 05:22:13 +0000 (0:00:00.822) 0:11:14.931 ******** 2026-04-09 05:22:26.936174 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-09 05:22:26.936185 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-09 
05:22:26.936196 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-09 05:22:26.936207 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.936218 | orchestrator | 2026-04-09 05:22:26.936229 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-09 05:22:26.936240 | orchestrator | Thursday 09 April 2026 05:22:14 +0000 (0:00:01.086) 0:11:16.018 ******** 2026-04-09 05:22:26.936251 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-04-09 05:22:26.936262 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-04-09 05:22:26.936273 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-04-09 05:22:26.936284 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-04-09 05:22:26.936295 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-04-09 05:22:26.936306 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-04-09 05:22:26.936317 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.936328 | orchestrator | 2026-04-09 05:22:26.936339 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-09 05:22:26.936350 | orchestrator | Thursday 09 April 2026 05:22:15 +0000 (0:00:01.405) 0:11:17.424 ******** 2026-04-09 05:22:26.936361 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 05:22:26.936373 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 05:22:26.936383 | orchestrator | 2026-04-09 05:22:26.936394 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-09 05:22:26.936405 | orchestrator | Thursday 09 April 2026 05:22:18 +0000 (0:00:03.083) 0:11:20.507 ******** 
2026-04-09 05:22:26.936416 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:22:26.936427 | orchestrator | 2026-04-09 05:22:26.936438 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 05:22:26.936449 | orchestrator | Thursday 09 April 2026 05:22:20 +0000 (0:00:02.056) 0:11:22.563 ******** 2026-04-09 05:22:26.936460 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-04-09 05:22:26.936473 | orchestrator | 2026-04-09 05:22:26.936484 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 05:22:26.936495 | orchestrator | Thursday 09 April 2026 05:22:21 +0000 (0:00:01.148) 0:11:23.712 ******** 2026-04-09 05:22:26.936506 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-04-09 05:22:26.936517 | orchestrator | 2026-04-09 05:22:26.936528 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 05:22:26.936539 | orchestrator | Thursday 09 April 2026 05:22:22 +0000 (0:00:01.133) 0:11:24.846 ******** 2026-04-09 05:22:26.936550 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:22:26.936561 | orchestrator | 2026-04-09 05:22:26.936572 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 05:22:26.936583 | orchestrator | Thursday 09 April 2026 05:22:24 +0000 (0:00:01.573) 0:11:26.419 ******** 2026-04-09 05:22:26.936594 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:22:26.936604 | orchestrator | 2026-04-09 05:22:26.936615 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 05:22:26.936626 | orchestrator | Thursday 09 April 2026 05:22:25 +0000 (0:00:01.193) 0:11:27.612 ******** 2026-04-09 05:22:26.936637 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
05:22:26.936648 | orchestrator | 2026-04-09 05:22:26.936659 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 05:22:26.936683 | orchestrator | Thursday 09 April 2026 05:22:26 +0000 (0:00:01.181) 0:11:28.794 ******** 2026-04-09 05:23:09.087962 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088076 | orchestrator | 2026-04-09 05:23:09.088092 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 05:23:09.088104 | orchestrator | Thursday 09 April 2026 05:22:28 +0000 (0:00:01.174) 0:11:29.968 ******** 2026-04-09 05:23:09.088115 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.088126 | orchestrator | 2026-04-09 05:23:09.088137 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 05:23:09.088147 | orchestrator | Thursday 09 April 2026 05:22:29 +0000 (0:00:01.580) 0:11:31.549 ******** 2026-04-09 05:23:09.088157 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088167 | orchestrator | 2026-04-09 05:23:09.088177 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 05:23:09.088187 | orchestrator | Thursday 09 April 2026 05:22:30 +0000 (0:00:01.148) 0:11:32.697 ******** 2026-04-09 05:23:09.088197 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088207 | orchestrator | 2026-04-09 05:23:09.088217 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 05:23:09.088227 | orchestrator | Thursday 09 April 2026 05:22:31 +0000 (0:00:01.129) 0:11:33.827 ******** 2026-04-09 05:23:09.088244 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.088261 | orchestrator | 2026-04-09 05:23:09.088277 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 05:23:09.088312 | orchestrator | Thursday 09 April 2026 
05:22:33 +0000 (0:00:01.616) 0:11:35.444 ******** 2026-04-09 05:23:09.088328 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.088345 | orchestrator | 2026-04-09 05:23:09.088362 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 05:23:09.088378 | orchestrator | Thursday 09 April 2026 05:22:35 +0000 (0:00:01.548) 0:11:36.992 ******** 2026-04-09 05:23:09.088393 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088410 | orchestrator | 2026-04-09 05:23:09.088426 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 05:23:09.088443 | orchestrator | Thursday 09 April 2026 05:22:35 +0000 (0:00:00.794) 0:11:37.787 ******** 2026-04-09 05:23:09.088459 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.088476 | orchestrator | 2026-04-09 05:23:09.088493 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 05:23:09.088509 | orchestrator | Thursday 09 April 2026 05:22:36 +0000 (0:00:00.805) 0:11:38.593 ******** 2026-04-09 05:23:09.088526 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088543 | orchestrator | 2026-04-09 05:23:09.088559 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 05:23:09.088577 | orchestrator | Thursday 09 April 2026 05:22:37 +0000 (0:00:00.758) 0:11:39.351 ******** 2026-04-09 05:23:09.088588 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088598 | orchestrator | 2026-04-09 05:23:09.088608 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 05:23:09.088617 | orchestrator | Thursday 09 April 2026 05:22:38 +0000 (0:00:00.762) 0:11:40.114 ******** 2026-04-09 05:23:09.088627 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088637 | orchestrator | 2026-04-09 05:23:09.088647 | orchestrator | TASK 
[ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 05:23:09.088656 | orchestrator | Thursday 09 April 2026 05:22:39 +0000 (0:00:00.774) 0:11:40.888 ******** 2026-04-09 05:23:09.088666 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088676 | orchestrator | 2026-04-09 05:23:09.088686 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 05:23:09.088695 | orchestrator | Thursday 09 April 2026 05:22:39 +0000 (0:00:00.753) 0:11:41.642 ******** 2026-04-09 05:23:09.088705 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088715 | orchestrator | 2026-04-09 05:23:09.088725 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 05:23:09.088757 | orchestrator | Thursday 09 April 2026 05:22:40 +0000 (0:00:00.850) 0:11:42.493 ******** 2026-04-09 05:23:09.088767 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.088776 | orchestrator | 2026-04-09 05:23:09.088786 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 05:23:09.088796 | orchestrator | Thursday 09 April 2026 05:22:41 +0000 (0:00:00.837) 0:11:43.330 ******** 2026-04-09 05:23:09.088806 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.088816 | orchestrator | 2026-04-09 05:23:09.088826 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 05:23:09.088836 | orchestrator | Thursday 09 April 2026 05:22:42 +0000 (0:00:00.847) 0:11:44.178 ******** 2026-04-09 05:23:09.088845 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.088855 | orchestrator | 2026-04-09 05:23:09.088865 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 05:23:09.088874 | orchestrator | Thursday 09 April 2026 05:22:43 +0000 (0:00:00.784) 0:11:44.962 ******** 2026-04-09 05:23:09.088884 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088893 | orchestrator | 2026-04-09 05:23:09.088903 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 05:23:09.088945 | orchestrator | Thursday 09 April 2026 05:22:43 +0000 (0:00:00.824) 0:11:45.787 ******** 2026-04-09 05:23:09.088958 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.088968 | orchestrator | 2026-04-09 05:23:09.088978 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 05:23:09.088988 | orchestrator | Thursday 09 April 2026 05:22:44 +0000 (0:00:00.765) 0:11:46.552 ******** 2026-04-09 05:23:09.088997 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089007 | orchestrator | 2026-04-09 05:23:09.089016 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 05:23:09.089026 | orchestrator | Thursday 09 April 2026 05:22:45 +0000 (0:00:00.764) 0:11:47.317 ******** 2026-04-09 05:23:09.089036 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089045 | orchestrator | 2026-04-09 05:23:09.089055 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 05:23:09.089065 | orchestrator | Thursday 09 April 2026 05:22:46 +0000 (0:00:00.751) 0:11:48.068 ******** 2026-04-09 05:23:09.089075 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089084 | orchestrator | 2026-04-09 05:23:09.089114 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 05:23:09.089125 | orchestrator | Thursday 09 April 2026 05:22:46 +0000 (0:00:00.772) 0:11:48.841 ******** 2026-04-09 05:23:09.089135 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089145 | orchestrator | 2026-04-09 05:23:09.089155 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 
2026-04-09 05:23:09.089164 | orchestrator | Thursday 09 April 2026 05:22:47 +0000 (0:00:00.778) 0:11:49.619 ******** 2026-04-09 05:23:09.089174 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089184 | orchestrator | 2026-04-09 05:23:09.089194 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-09 05:23:09.089204 | orchestrator | Thursday 09 April 2026 05:22:48 +0000 (0:00:00.758) 0:11:50.377 ******** 2026-04-09 05:23:09.089214 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089224 | orchestrator | 2026-04-09 05:23:09.089233 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 05:23:09.089243 | orchestrator | Thursday 09 April 2026 05:22:49 +0000 (0:00:00.794) 0:11:51.172 ******** 2026-04-09 05:23:09.089253 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089263 | orchestrator | 2026-04-09 05:23:09.089272 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 05:23:09.089282 | orchestrator | Thursday 09 April 2026 05:22:50 +0000 (0:00:00.786) 0:11:51.958 ******** 2026-04-09 05:23:09.089299 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089309 | orchestrator | 2026-04-09 05:23:09.089319 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 05:23:09.089336 | orchestrator | Thursday 09 April 2026 05:22:50 +0000 (0:00:00.809) 0:11:52.768 ******** 2026-04-09 05:23:09.089346 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089356 | orchestrator | 2026-04-09 05:23:09.089366 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-09 05:23:09.089375 | orchestrator | Thursday 09 April 2026 05:22:51 +0000 (0:00:00.837) 0:11:53.605 ******** 2026-04-09 05:23:09.089387 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
05:23:09.089404 | orchestrator | 2026-04-09 05:23:09.089421 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 05:23:09.089438 | orchestrator | Thursday 09 April 2026 05:22:52 +0000 (0:00:00.764) 0:11:54.370 ******** 2026-04-09 05:23:09.089454 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.089471 | orchestrator | 2026-04-09 05:23:09.089488 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 05:23:09.089506 | orchestrator | Thursday 09 April 2026 05:22:54 +0000 (0:00:01.727) 0:11:56.097 ******** 2026-04-09 05:23:09.089524 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.089540 | orchestrator | 2026-04-09 05:23:09.089557 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 05:23:09.089574 | orchestrator | Thursday 09 April 2026 05:22:56 +0000 (0:00:02.072) 0:11:58.170 ******** 2026-04-09 05:23:09.089591 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-04-09 05:23:09.089610 | orchestrator | 2026-04-09 05:23:09.089628 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 05:23:09.089646 | orchestrator | Thursday 09 April 2026 05:22:57 +0000 (0:00:01.160) 0:11:59.331 ******** 2026-04-09 05:23:09.089662 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089676 | orchestrator | 2026-04-09 05:23:09.089686 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 05:23:09.089696 | orchestrator | Thursday 09 April 2026 05:22:58 +0000 (0:00:01.185) 0:12:00.516 ******** 2026-04-09 05:23:09.089706 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089715 | orchestrator | 2026-04-09 05:23:09.089725 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 
2026-04-09 05:23:09.089735 | orchestrator | Thursday 09 April 2026 05:22:59 +0000 (0:00:01.140) 0:12:01.657 ******** 2026-04-09 05:23:09.089745 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 05:23:09.089755 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 05:23:09.089764 | orchestrator | 2026-04-09 05:23:09.089774 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 05:23:09.089784 | orchestrator | Thursday 09 April 2026 05:23:01 +0000 (0:00:01.843) 0:12:03.501 ******** 2026-04-09 05:23:09.089793 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.089803 | orchestrator | 2026-04-09 05:23:09.089813 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 05:23:09.089823 | orchestrator | Thursday 09 April 2026 05:23:03 +0000 (0:00:01.505) 0:12:05.006 ******** 2026-04-09 05:23:09.089833 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089842 | orchestrator | 2026-04-09 05:23:09.089852 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 05:23:09.089862 | orchestrator | Thursday 09 April 2026 05:23:04 +0000 (0:00:01.188) 0:12:06.194 ******** 2026-04-09 05:23:09.089872 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089882 | orchestrator | 2026-04-09 05:23:09.089891 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 05:23:09.089901 | orchestrator | Thursday 09 April 2026 05:23:05 +0000 (0:00:00.763) 0:12:06.958 ******** 2026-04-09 05:23:09.089911 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:09.089958 | orchestrator | 2026-04-09 05:23:09.089969 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 05:23:09.089999 | orchestrator | 
Thursday 09 April 2026 05:23:05 +0000 (0:00:00.785) 0:12:07.744 ******** 2026-04-09 05:23:09.090008 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-04-09 05:23:09.090075 | orchestrator | 2026-04-09 05:23:09.090087 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 05:23:09.090097 | orchestrator | Thursday 09 April 2026 05:23:07 +0000 (0:00:01.268) 0:12:09.013 ******** 2026-04-09 05:23:09.090106 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:09.090116 | orchestrator | 2026-04-09 05:23:09.090127 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-09 05:23:09.090148 | orchestrator | Thursday 09 April 2026 05:23:09 +0000 (0:00:01.933) 0:12:10.946 ******** 2026-04-09 05:23:48.657815 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 05:23:48.657970 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 05:23:48.657988 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 05:23:48.658002 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658015 | orchestrator | 2026-04-09 05:23:48.658096 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-09 05:23:48.658107 | orchestrator | Thursday 09 April 2026 05:23:10 +0000 (0:00:01.143) 0:12:12.090 ******** 2026-04-09 05:23:48.658118 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658129 | orchestrator | 2026-04-09 05:23:48.658141 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-09 05:23:48.658152 | orchestrator | Thursday 09 April 2026 05:23:11 +0000 (0:00:01.146) 0:12:13.236 ******** 2026-04-09 05:23:48.658163 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
05:23:48.658174 | orchestrator | 2026-04-09 05:23:48.658185 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-09 05:23:48.658196 | orchestrator | Thursday 09 April 2026 05:23:12 +0000 (0:00:01.202) 0:12:14.438 ******** 2026-04-09 05:23:48.658222 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658236 | orchestrator | 2026-04-09 05:23:48.658255 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-09 05:23:48.658273 | orchestrator | Thursday 09 April 2026 05:23:13 +0000 (0:00:01.132) 0:12:15.570 ******** 2026-04-09 05:23:48.658290 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658308 | orchestrator | 2026-04-09 05:23:48.658326 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-09 05:23:48.658343 | orchestrator | Thursday 09 April 2026 05:23:14 +0000 (0:00:01.139) 0:12:16.710 ******** 2026-04-09 05:23:48.658361 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658379 | orchestrator | 2026-04-09 05:23:48.658398 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 05:23:48.658417 | orchestrator | Thursday 09 April 2026 05:23:15 +0000 (0:00:00.803) 0:12:17.514 ******** 2026-04-09 05:23:48.658436 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:48.658457 | orchestrator | 2026-04-09 05:23:48.658477 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 05:23:48.658495 | orchestrator | Thursday 09 April 2026 05:23:17 +0000 (0:00:02.300) 0:12:19.814 ******** 2026-04-09 05:23:48.658509 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:48.658522 | orchestrator | 2026-04-09 05:23:48.658535 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 05:23:48.658549 | orchestrator | Thursday 09 April 2026 
05:23:18 +0000 (0:00:00.782) 0:12:20.596 ******** 2026-04-09 05:23:48.658562 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-04-09 05:23:48.658574 | orchestrator | 2026-04-09 05:23:48.658588 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-09 05:23:48.658601 | orchestrator | Thursday 09 April 2026 05:23:19 +0000 (0:00:01.103) 0:12:21.700 ******** 2026-04-09 05:23:48.658614 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658653 | orchestrator | 2026-04-09 05:23:48.658666 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-09 05:23:48.658678 | orchestrator | Thursday 09 April 2026 05:23:21 +0000 (0:00:01.207) 0:12:22.907 ******** 2026-04-09 05:23:48.658692 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658704 | orchestrator | 2026-04-09 05:23:48.658715 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-09 05:23:48.658726 | orchestrator | Thursday 09 April 2026 05:23:22 +0000 (0:00:01.210) 0:12:24.118 ******** 2026-04-09 05:23:48.658737 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658747 | orchestrator | 2026-04-09 05:23:48.658758 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-09 05:23:48.658768 | orchestrator | Thursday 09 April 2026 05:23:23 +0000 (0:00:01.113) 0:12:25.231 ******** 2026-04-09 05:23:48.658779 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658789 | orchestrator | 2026-04-09 05:23:48.658800 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-09 05:23:48.658810 | orchestrator | Thursday 09 April 2026 05:23:24 +0000 (0:00:01.154) 0:12:26.386 ******** 2026-04-09 05:23:48.658821 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658832 | 
orchestrator | 2026-04-09 05:23:48.658842 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-09 05:23:48.658853 | orchestrator | Thursday 09 April 2026 05:23:25 +0000 (0:00:01.158) 0:12:27.545 ******** 2026-04-09 05:23:48.658863 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658901 | orchestrator | 2026-04-09 05:23:48.658919 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-09 05:23:48.658930 | orchestrator | Thursday 09 April 2026 05:23:26 +0000 (0:00:01.203) 0:12:28.748 ******** 2026-04-09 05:23:48.658941 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658952 | orchestrator | 2026-04-09 05:23:48.658963 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-09 05:23:48.658973 | orchestrator | Thursday 09 April 2026 05:23:28 +0000 (0:00:01.163) 0:12:29.912 ******** 2026-04-09 05:23:48.658984 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.658995 | orchestrator | 2026-04-09 05:23:48.659005 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-09 05:23:48.659016 | orchestrator | Thursday 09 April 2026 05:23:29 +0000 (0:00:01.188) 0:12:31.100 ******** 2026-04-09 05:23:48.659026 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:23:48.659037 | orchestrator | 2026-04-09 05:23:48.659048 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-09 05:23:48.659058 | orchestrator | Thursday 09 April 2026 05:23:30 +0000 (0:00:00.874) 0:12:31.975 ******** 2026-04-09 05:23:48.659069 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-04-09 05:23:48.659081 | orchestrator | 2026-04-09 05:23:48.659092 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-09 
05:23:48.659122 | orchestrator | Thursday 09 April 2026 05:23:31 +0000 (0:00:01.124) 0:12:33.099 ******** 2026-04-09 05:23:48.659134 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-04-09 05:23:48.659146 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-09 05:23:48.659157 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-09 05:23:48.659167 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-09 05:23:48.659178 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-09 05:23:48.659189 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-09 05:23:48.659200 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-09 05:23:48.659210 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-09 05:23:48.659221 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-09 05:23:48.659232 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-09 05:23:48.659253 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-09 05:23:48.659264 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-09 05:23:48.659283 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-09 05:23:48.659294 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-09 05:23:48.659305 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-04-09 05:23:48.659316 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-04-09 05:23:48.659327 | orchestrator | 2026-04-09 05:23:48.659337 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-09 05:23:48.659348 | orchestrator | Thursday 09 April 2026 05:23:37 +0000 (0:00:06.297) 0:12:39.397 ******** 2026-04-09 05:23:48.659359 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 05:23:48.659370 | orchestrator | 2026-04-09 05:23:48.659381 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 05:23:48.659391 | orchestrator | Thursday 09 April 2026 05:23:38 +0000 (0:00:00.766) 0:12:40.163 ******** 2026-04-09 05:23:48.659402 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.659412 | orchestrator | 2026-04-09 05:23:48.659423 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 05:23:48.659434 | orchestrator | Thursday 09 April 2026 05:23:39 +0000 (0:00:00.790) 0:12:40.953 ******** 2026-04-09 05:23:48.659445 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.659456 | orchestrator | 2026-04-09 05:23:48.659466 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 05:23:48.659477 | orchestrator | Thursday 09 April 2026 05:23:39 +0000 (0:00:00.782) 0:12:41.735 ******** 2026-04-09 05:23:48.659488 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.659499 | orchestrator | 2026-04-09 05:23:48.659510 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-09 05:23:48.659520 | orchestrator | Thursday 09 April 2026 05:23:40 +0000 (0:00:00.809) 0:12:42.545 ******** 2026-04-09 05:23:48.659531 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.659542 | orchestrator | 2026-04-09 05:23:48.659552 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 05:23:48.659563 | orchestrator | Thursday 09 April 2026 05:23:41 +0000 (0:00:00.781) 0:12:43.327 ******** 2026-04-09 05:23:48.659574 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:23:48.659585 | orchestrator | 2026-04-09 05:23:48.659596 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-04-09 05:23:48.659607 | orchestrator | Thursday 09 April 2026 05:23:42 +0000 (0:00:00.773) 0:12:44.101 ********
2026-04-09 05:23:48.659617 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:23:48.659628 | orchestrator |
2026-04-09 05:23:48.659639 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 05:23:48.659657 | orchestrator | Thursday 09 April 2026 05:23:43 +0000 (0:00:00.773) 0:12:44.874 ********
2026-04-09 05:23:48.659674 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:23:48.659692 | orchestrator |
2026-04-09 05:23:48.659708 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 05:23:48.659725 | orchestrator | Thursday 09 April 2026 05:23:43 +0000 (0:00:00.789) 0:12:45.664 ********
2026-04-09 05:23:48.659743 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:23:48.659760 | orchestrator |
2026-04-09 05:23:48.659779 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 05:23:48.659798 | orchestrator | Thursday 09 April 2026 05:23:44 +0000 (0:00:00.782) 0:12:46.446 ********
2026-04-09 05:23:48.659816 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:23:48.659834 | orchestrator |
2026-04-09 05:23:48.659854 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 05:23:48.659899 | orchestrator | Thursday 09 April 2026 05:23:45 +0000 (0:00:00.783) 0:12:47.230 ********
2026-04-09 05:23:48.659920 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:23:48.659942 | orchestrator |
2026-04-09 05:23:48.659953 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 05:23:48.659964 | orchestrator | Thursday 09 April 2026 05:23:46 +0000 (0:00:00.791) 0:12:48.022 ********
2026-04-09 05:23:48.659974 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:23:48.659985 | orchestrator |
2026-04-09 05:23:48.659996 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 05:23:48.660006 | orchestrator | Thursday 09 April 2026 05:23:46 +0000 (0:00:00.767) 0:12:48.790 ********
2026-04-09 05:23:48.660017 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:23:48.660028 | orchestrator |
2026-04-09 05:23:48.660039 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 05:23:48.660049 | orchestrator | Thursday 09 April 2026 05:23:47 +0000 (0:00:00.894) 0:12:49.684 ********
2026-04-09 05:23:48.660060 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:23:48.660071 | orchestrator |
2026-04-09 05:23:48.660082 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 05:23:48.660103 | orchestrator | Thursday 09 April 2026 05:23:48 +0000 (0:00:00.834) 0:12:50.519 ********
2026-04-09 05:24:35.835104 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835220 | orchestrator |
2026-04-09 05:24:35.835238 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-09 05:24:35.835251 | orchestrator | Thursday 09 April 2026 05:23:49 +0000 (0:00:00.874) 0:12:51.393 ********
2026-04-09 05:24:35.835263 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835274 | orchestrator |
2026-04-09 05:24:35.835285 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-09 05:24:35.835296 | orchestrator | Thursday 09 April 2026 05:23:50 +0000 (0:00:00.761) 0:12:52.155 ********
2026-04-09 05:24:35.835307 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835318 | orchestrator |
2026-04-09 05:24:35.835330 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:24:35.835342 | orchestrator | Thursday 09 April 2026 05:23:51 +0000 (0:00:00.750) 0:12:52.906 ********
2026-04-09 05:24:35.835353 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835364 | orchestrator |
2026-04-09 05:24:35.835376 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:24:35.835403 | orchestrator | Thursday 09 April 2026 05:23:51 +0000 (0:00:00.803) 0:12:53.709 ********
2026-04-09 05:24:35.835414 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835425 | orchestrator |
2026-04-09 05:24:35.835436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:24:35.835447 | orchestrator | Thursday 09 April 2026 05:23:52 +0000 (0:00:00.809) 0:12:54.519 ********
2026-04-09 05:24:35.835458 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835469 | orchestrator |
2026-04-09 05:24:35.835480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:24:35.835491 | orchestrator | Thursday 09 April 2026 05:23:53 +0000 (0:00:00.791) 0:12:55.311 ********
2026-04-09 05:24:35.835502 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835513 | orchestrator |
2026-04-09 05:24:35.835524 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:24:35.835535 | orchestrator | Thursday 09 April 2026 05:23:54 +0000 (0:00:00.784) 0:12:56.095 ********
2026-04-09 05:24:35.835546 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 05:24:35.835557 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 05:24:35.835568 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 05:24:35.835579 | orchestrator | skipping: [testbed-node-1]
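As an aside on the skipped ceph-config tasks above: `num_osds` is normally derived from the JSON output of `ceph-volume lvm batch --report`, which ceph-ansible parses in two shapes (a "legacy report" dict and a "new report" list, matching the two Set_fact tasks). A minimal sketch of that counting logic — the payloads below are invented for illustration, not taken from this run:

```python
import json

def count_osds(report_json: str) -> int:
    """Count OSDs to be created from 'ceph-volume lvm batch --report --format json'.

    Handles both report shapes referenced by the tasks above: the legacy
    report (a dict carrying an 'osds' list) and the new report (a plain
    list with one entry per OSD).
    """
    report = json.loads(report_json)
    if isinstance(report, dict):          # legacy report shape
        return len(report.get("osds", []))
    return len(report)                    # new report shape

# Hypothetical example payloads (device names are illustrative):
legacy = '{"osds": [{"data": "/dev/sdb"}, {"data": "/dev/sdc"}], "vgs": []}'
new = '[{"data_device": "/dev/sdb"}, {"data_device": "/dev/sdc"}]'
```

Either shape yields the same count, which is why the role probes both before adding the already-deployed OSDs reported by `ceph-volume lvm list`.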
2026-04-09 05:24:35.835590 | orchestrator |
2026-04-09 05:24:35.835601 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 05:24:35.835612 | orchestrator | Thursday 09 April 2026 05:23:55 +0000 (0:00:01.045) 0:12:57.141 ********
2026-04-09 05:24:35.835645 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 05:24:35.835660 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 05:24:35.835673 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 05:24:35.835686 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835698 | orchestrator |
2026-04-09 05:24:35.835711 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 05:24:35.835724 | orchestrator | Thursday 09 April 2026 05:23:56 +0000 (0:00:01.042) 0:12:58.184 ********
2026-04-09 05:24:35.835737 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 05:24:35.835750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 05:24:35.835763 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 05:24:35.835776 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835788 | orchestrator |
2026-04-09 05:24:35.835802 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 05:24:35.835815 | orchestrator | Thursday 09 April 2026 05:23:57 +0000 (0:00:01.057) 0:12:59.241 ********
2026-04-09 05:24:35.835826 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835862 | orchestrator |
2026-04-09 05:24:35.835873 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 05:24:35.835884 | orchestrator | Thursday 09 April 2026 05:23:58 +0000 (0:00:00.802) 0:13:00.044 ********
2026-04-09 05:24:35.835896 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-09 05:24:35.835907 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.835918 | orchestrator |
2026-04-09 05:24:35.835929 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 05:24:35.835940 | orchestrator | Thursday 09 April 2026 05:23:59 +0000 (0:00:00.972) 0:13:01.016 ********
2026-04-09 05:24:35.835951 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.835962 | orchestrator |
2026-04-09 05:24:35.835973 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-09 05:24:35.835984 | orchestrator | Thursday 09 April 2026 05:24:00 +0000 (0:00:01.434) 0:13:02.451 ********
2026-04-09 05:24:35.835995 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836005 | orchestrator |
2026-04-09 05:24:35.836016 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-09 05:24:35.836028 | orchestrator | Thursday 09 April 2026 05:24:01 +0000 (0:00:01.195) 0:13:03.234 ********
2026-04-09 05:24:35.836039 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-04-09 05:24:35.836051 | orchestrator |
2026-04-09 05:24:35.836061 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-09 05:24:35.836072 | orchestrator | Thursday 09 April 2026 05:24:02 +0000 (0:00:03.214) 0:13:04.430 ********
2026-04-09 05:24:35.836083 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-04-09 05:24:35.836094 | orchestrator |
2026-04-09 05:24:35.836105 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-09 05:24:35.836116 | orchestrator | Thursday 09 April 2026 05:24:05 +0000 (0:00:01.161) 0:13:07.644 ********
2026-04-09 05:24:35.836127 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.836138 | orchestrator |
2026-04-09 05:24:35.836149 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-09 05:24:35.836177 | orchestrator | Thursday 09 April 2026 05:24:06 +0000 (0:00:01.161) 0:13:08.806 ********
2026-04-09 05:24:35.836190 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836201 | orchestrator |
2026-04-09 05:24:35.836212 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-09 05:24:35.836224 | orchestrator | Thursday 09 April 2026 05:24:08 +0000 (0:00:01.221) 0:13:10.028 ********
2026-04-09 05:24:35.836235 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836246 | orchestrator |
2026-04-09 05:24:35.836257 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-09 05:24:35.836276 | orchestrator | Thursday 09 April 2026 05:24:09 +0000 (0:00:01.250) 0:13:11.278 ********
2026-04-09 05:24:35.836287 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:24:35.836298 | orchestrator |
2026-04-09 05:24:35.836309 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-09 05:24:35.836320 | orchestrator | Thursday 09 April 2026 05:24:11 +0000 (0:00:02.211) 0:13:13.491 ********
2026-04-09 05:24:35.836331 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836342 | orchestrator |
2026-04-09 05:24:35.836353 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-09 05:24:35.836369 | orchestrator | Thursday 09 April 2026 05:24:13 +0000 (0:00:01.572) 0:13:15.063 ********
2026-04-09 05:24:35.836381 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836392 | orchestrator |
2026-04-09 05:24:35.836402 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-09 05:24:35.836413 | orchestrator | Thursday 09 April 2026 05:24:14 +0000 (0:00:01.468) 0:13:16.531 ********
2026-04-09 05:24:35.836424 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836435 | orchestrator |
2026-04-09 05:24:35.836446 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-09 05:24:35.836457 | orchestrator | Thursday 09 April 2026 05:24:16 +0000 (0:00:01.468) 0:13:17.999 ********
2026-04-09 05:24:35.836468 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-09 05:24:35.836479 | orchestrator |
2026-04-09 05:24:35.836490 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-09 05:24:35.836501 | orchestrator | Thursday 09 April 2026 05:24:18 +0000 (0:00:01.923) 0:13:19.923 ********
2026-04-09 05:24:35.836512 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-09 05:24:35.836523 | orchestrator |
2026-04-09 05:24:35.836534 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-09 05:24:35.836545 | orchestrator | Thursday 09 April 2026 05:24:19 +0000 (0:00:01.590) 0:13:21.514 ********
2026-04-09 05:24:35.836555 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 05:24:35.836566 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-09 05:24:35.836577 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 05:24:35.836588 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-09 05:24:35.836599 | orchestrator |
2026-04-09 05:24:35.836610 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-09 05:24:35.836621 | orchestrator | Thursday 09 April 2026 05:24:23 +0000 (0:00:03.845) 0:13:25.360 ********
2026-04-09 05:24:35.836631 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:24:35.836642 | orchestrator |
2026-04-09 05:24:35.836653 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-09 05:24:35.836664 | orchestrator | Thursday 09 April 2026 05:24:25 +0000 (0:00:02.134) 0:13:27.494 ********
2026-04-09 05:24:35.836675 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836686 | orchestrator |
2026-04-09 05:24:35.836697 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-09 05:24:35.836708 | orchestrator | Thursday 09 April 2026 05:24:26 +0000 (0:00:01.135) 0:13:28.630 ********
2026-04-09 05:24:35.836719 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836730 | orchestrator |
2026-04-09 05:24:35.836741 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-09 05:24:35.836751 | orchestrator | Thursday 09 April 2026 05:24:27 +0000 (0:00:01.140) 0:13:29.770 ********
2026-04-09 05:24:35.836762 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836773 | orchestrator |
2026-04-09 05:24:35.836784 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-09 05:24:35.836795 | orchestrator | Thursday 09 April 2026 05:24:29 +0000 (0:00:01.790) 0:13:31.561 ********
2026-04-09 05:24:35.836806 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:24:35.836817 | orchestrator |
2026-04-09 05:24:35.836828 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-09 05:24:35.836865 | orchestrator | Thursday 09 April 2026 05:24:31 +0000 (0:00:01.518) 0:13:33.079 ********
2026-04-09 05:24:35.836876 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.836887 | orchestrator |
2026-04-09 05:24:35.836898 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-09 05:24:35.836909 | orchestrator | Thursday 09 April 2026 05:24:31 +0000 (0:00:00.774) 0:13:33.854 ********
2026-04-09 05:24:35.836920 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1
2026-04-09 05:24:35.836931 | orchestrator |
2026-04-09 05:24:35.836942 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-09 05:24:35.836953 | orchestrator | Thursday 09 April 2026 05:24:33 +0000 (0:00:01.108) 0:13:34.963 ********
2026-04-09 05:24:35.836964 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.836975 | orchestrator |
2026-04-09 05:24:35.836986 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-09 05:24:35.836996 | orchestrator | Thursday 09 April 2026 05:24:34 +0000 (0:00:01.118) 0:13:36.081 ********
2026-04-09 05:24:35.837007 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:24:35.837018 | orchestrator |
2026-04-09 05:24:35.837029 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-09 05:24:35.837040 | orchestrator | Thursday 09 April 2026 05:24:35 +0000 (0:00:01.119) 0:13:37.201 ********
2026-04-09 05:24:35.837051 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1
2026-04-09 05:24:35.837062 | orchestrator |
2026-04-09 05:24:35.837079 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-09 05:25:45.287156 | orchestrator | Thursday 09 April 2026 05:24:36 +0000 (0:00:01.198) 0:13:38.400 ********
2026-04-09 05:25:45.287284 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:25:45.287302 | orchestrator |
2026-04-09 05:25:45.287314 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-09 05:25:45.287326 | orchestrator | Thursday 09 April 2026 05:24:38 +0000 (0:00:02.236) 0:13:40.636 ********
2026-04-09 05:25:45.287337 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:25:45.287349 | orchestrator |
2026-04-09 05:25:45.287360 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-09 05:25:45.287371 | orchestrator | Thursday 09 April 2026 05:24:40 +0000 (0:00:01.967) 0:13:42.603 ********
2026-04-09 05:25:45.287382 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:25:45.287393 | orchestrator |
2026-04-09 05:25:45.287404 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-09 05:25:45.287415 | orchestrator | Thursday 09 April 2026 05:24:43 +0000 (0:00:02.462) 0:13:45.066 ********
2026-04-09 05:25:45.287426 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:25:45.287438 | orchestrator |
2026-04-09 05:25:45.287466 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-09 05:25:45.287478 | orchestrator | Thursday 09 April 2026 05:24:46 +0000 (0:00:02.884) 0:13:47.950 ********
2026-04-09 05:25:45.287489 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1
2026-04-09 05:25:45.287500 | orchestrator |
2026-04-09 05:25:45.287511 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-09 05:25:45.287522 | orchestrator | Thursday 09 April 2026 05:24:47 +0000 (0:00:01.130) 0:13:49.081 ********
2026-04-09 05:25:45.287532 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left).
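The FAILED - RETRYING line above is Ansible's normal retry behavior for the quorum-wait task: the monitor is polled until it appears in the quorum, up to a retry limit. A minimal sketch of that poll loop — the `get_status` probe stands in for something like `ceph quorum_status --format json` run inside the mon container (the exact command is an assumption, not taken from this log):

```python
import json
import time

def wait_for_quorum(get_status, hostname, retries=10, delay=5):
    """Poll a quorum-status probe until `hostname` appears in quorum_names.

    `get_status` is any callable returning the JSON text of a
    quorum-status query; injecting it keeps the sketch testable.
    """
    for _ in range(retries):
        try:
            status = json.loads(get_status())
            if hostname in status.get("quorum_names", []):
                return status
        except (ValueError, OSError):
            pass                     # probe failed; retry after the delay
        time.sleep(delay)
    raise TimeoutError(f"{hostname} did not join the quorum after {retries} tries")

# Simulated probe: the first poll finds no quorum, the second succeeds,
# mirroring the single FAILED - RETRYING line followed by "ok" above.
responses = iter(['{}', '{"quorum_names": ["testbed-node-0", "testbed-node-1"]}'])
status = wait_for_quorum(lambda: next(responses), "testbed-node-1", delay=0)
```

The 23-second duration recorded for this task is consistent with one failed poll plus the retry delay before the monitor joined.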
2026-04-09 05:25:45.287544 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:25:45.287555 | orchestrator |
2026-04-09 05:25:45.287565 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-09 05:25:45.287576 | orchestrator | Thursday 09 April 2026 05:25:10 +0000 (0:00:23.108) 0:14:12.190 ********
2026-04-09 05:25:45.287587 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:25:45.287598 | orchestrator |
2026-04-09 05:25:45.287609 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-09 05:25:45.287620 | orchestrator | Thursday 09 April 2026 05:25:13 +0000 (0:00:02.744) 0:14:14.934 ********
2026-04-09 05:25:45.287654 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:25:45.287668 | orchestrator |
2026-04-09 05:25:45.287681 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-09 05:25:45.287693 | orchestrator | Thursday 09 April 2026 05:25:13 +0000 (0:00:00.762) 0:14:15.696 ********
2026-04-09 05:25:45.287709 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-09 05:25:45.287726 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-09 05:25:45.287739 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-09 05:25:45.287752 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-09 05:25:45.287767 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-09 05:25:45.287829 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}])
2026-04-09 05:25:45.287845 | orchestrator |
2026-04-09 05:25:45.287858 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-04-09 05:25:45.287871 | orchestrator | Thursday 09 April 2026 05:25:23 +0000 (0:00:09.842) 0:14:25.538 ********
2026-04-09 05:25:45.287884 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:25:45.287896 | orchestrator |
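The "Set cluster configs" loop above applies each global option and skips the one whose value is still Ansible's `__omit_place_holder__` sentinel (the `osd_crush_chooseleaf_type` item). This is not the module's actual implementation, but the skip logic visible in those item results can be sketched as:

```python
def config_set_commands(section, options):
    """Render 'ceph config set' commands for one config section, skipping
    values left as Ansible omit placeholders (as the skipped
    osd_crush_chooseleaf_type item in the log demonstrates)."""
    cmds = []
    for key, value in options.items():
        if isinstance(value, str) and value.startswith("__omit_place_holder__"):
            continue  # unset option: Ansible's omit sentinel, never applied
        cmds.append(f"ceph config set {section} {key} {value}")
    return cmds

# Values taken from the loop items above; the placeholder hash is per-run.
opts = {
    "public_network": "192.168.16.0/20",
    "cluster_network": "192.168.16.0/20",
    "osd_pool_default_crush_rule": -1,
    "osd_crush_chooseleaf_type": "__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d",
}
cmds = config_set_commands("global", opts)
```

The placeholder string is how Ansible represents `omit` internally, which is why the corresponding loop item reports "skipping" rather than an applied value.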
2026-04-09 05:25:45.287909 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 05:25:45.287921 | orchestrator | Thursday 09 April 2026 05:25:25 +0000 (0:00:02.232) 0:14:27.770 ********
2026-04-09 05:25:45.287933 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:25:45.287946 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-04-09 05:25:45.287958 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-04-09 05:25:45.287978 | orchestrator |
2026-04-09 05:25:45.287996 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 05:25:45.288024 | orchestrator | Thursday 09 April 2026 05:25:27 +0000 (0:00:01.591) 0:14:29.361 ********
2026-04-09 05:25:45.288043 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 05:25:45.288075 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 05:25:45.288094 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 05:25:45.288115 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:25:45.288134 | orchestrator |
2026-04-09 05:25:45.288153 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-09 05:25:45.288172 | orchestrator | Thursday 09 April 2026 05:25:28 +0000 (0:00:01.031) 0:14:30.393 ********
2026-04-09 05:25:45.288183 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:25:45.288194 | orchestrator |
2026-04-09 05:25:45.288205 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-09 05:25:45.288215 | orchestrator | Thursday 09 April 2026 05:25:29 +0000 (0:00:00.796) 0:14:31.190 ********
2026-04-09 05:25:45.288226 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:25:45.288237 | orchestrator |
2026-04-09 05:25:45.288247 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-09 05:25:45.288259 | orchestrator |
2026-04-09 05:25:45.288277 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-09 05:25:45.288296 | orchestrator | Thursday 09 April 2026 05:25:31 +0000 (0:00:02.179) 0:14:33.369 ********
2026-04-09 05:25:45.288316 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:25:45.288333 | orchestrator |
2026-04-09 05:25:45.288349 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-09 05:25:45.288361 | orchestrator | Thursday 09 April 2026 05:25:32 +0000 (0:00:01.180) 0:14:34.550 ********
2026-04-09 05:25:45.288371 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:25:45.288382 | orchestrator |
2026-04-09 05:25:45.288393 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-09 05:25:45.288404 | orchestrator | Thursday 09 April 2026 05:25:33 +0000 (0:00:00.790) 0:14:35.340 ********
2026-04-09 05:25:45.288420 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:25:45.288438 | orchestrator |
2026-04-09 05:25:45.288454 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-09 05:25:45.288465 | orchestrator | Thursday 09 April 2026 05:25:34 +0000 (0:00:00.781) 0:14:36.122 ********
2026-04-09 05:25:45.288476 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:25:45.288486 | orchestrator |
2026-04-09 05:25:45.288497 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 05:25:45.288508 | orchestrator | Thursday 09 April 2026 05:25:35 +0000 (0:00:00.820) 0:14:36.943 ********
2026-04-09 05:25:45.288518 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-09 05:25:45.288529 | orchestrator |
2026-04-09 05:25:45.288540 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 05:25:45.288556 | orchestrator | Thursday 09 April 2026 05:25:36 +0000 (0:00:01.133) 0:14:38.076 ********
2026-04-09 05:25:45.288575 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:25:45.288593 | orchestrator |
2026-04-09 05:25:45.288612 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 05:25:45.288630 | orchestrator | Thursday 09 April 2026 05:25:38 +0000 (0:00:01.805) 0:14:39.882 ********
2026-04-09 05:25:45.288643 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:25:45.288654 | orchestrator |
2026-04-09 05:25:45.288665 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 05:25:45.288676 | orchestrator | Thursday 09 April 2026 05:25:39 +0000 (0:00:01.461) 0:14:41.027 ********
2026-04-09 05:25:45.288690 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:25:45.288708 | orchestrator |
2026-04-09 05:25:45.288726 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 05:25:45.288745 | orchestrator | Thursday 09 April 2026 05:25:40 +0000 (0:00:01.461) 0:14:42.488 ********
2026-04-09 05:25:45.288763 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:25:45.288806 | orchestrator |
2026-04-09 05:25:45.288819 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 05:25:45.288838 | orchestrator | Thursday 09 April 2026 05:25:41 +0000 (0:00:01.129) 0:14:43.618 ********
2026-04-09 05:25:45.288849 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:25:45.288860 | orchestrator |
2026-04-09 05:25:45.288871 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 05:25:45.288881 | orchestrator | Thursday 09 April 2026 05:25:42 +0000 (0:00:01.192) 0:14:44.810 ********
2026-04-09 05:25:45.288892 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:25:45.288903 | orchestrator |
2026-04-09 05:25:45.288913 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 05:25:45.288929 | orchestrator | Thursday 09 April 2026 05:25:44 +0000 (0:00:01.171) 0:14:45.981 ********
2026-04-09 05:25:45.288947 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:25:45.288966 | orchestrator |
2026-04-09 05:25:45.288985 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 05:25:45.289008 | orchestrator | Thursday 09 April 2026 05:25:45 +0000 (0:00:01.165) 0:14:47.146 ********
2026-04-09 05:26:10.680852 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:26:10.680967 | orchestrator |
2026-04-09 05:26:10.680984 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 05:26:10.680998 | orchestrator | Thursday 09 April 2026 05:25:46 +0000 (0:00:01.128) 0:14:48.275 ********
2026-04-09 05:26:10.681010 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:26:10.681022 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:26:10.681033 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:26:10.681045 | orchestrator |
2026-04-09 05:26:10.681057 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 05:26:10.681068 | orchestrator | Thursday 09 April 2026 05:25:48 +0000 (0:00:01.948) 0:14:50.223 ********
2026-04-09 05:26:10.681079 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:26:10.681090 | orchestrator |
2026-04-09 05:26:10.681102 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 05:26:10.681130 | orchestrator | Thursday 09 April 2026 05:25:49 +0000 (0:00:01.247) 0:14:51.470 ********
2026-04-09 05:26:10.681141 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:26:10.681152 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:26:10.681164 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:26:10.681176 | orchestrator |
2026-04-09 05:26:10.681187 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 05:26:10.681199 | orchestrator | Thursday 09 April 2026 05:25:52 +0000 (0:00:03.327) 0:14:54.798 ********
2026-04-09 05:26:10.681210 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 05:26:10.681222 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 05:26:10.681234 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:26:10.681245 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:26:10.681257 | orchestrator |
2026-04-09 05:26:10.681268 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 05:26:10.681279 | orchestrator | Thursday 09 April 2026 05:25:54 +0000 (0:00:01.748) 0:14:56.546 ********
2026-04-09 05:26:10.681293 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 05:26:10.681307 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 05:26:10.681319 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 05:26:10.681353 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:26:10.681369 | orchestrator |
2026-04-09 05:26:10.681382 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 05:26:10.681396 | orchestrator | Thursday 09 April 2026 05:25:56 +0000 (0:00:01.947) 0:14:58.494 ********
2026-04-09 05:26:10.681412 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:26:10.681428 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:26:10.681442 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:26:10.681455 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:26:10.681468 | orchestrator |
2026-04-09 05:26:10.681482 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 05:26:10.681496 | orchestrator | Thursday 09 April 2026 05:25:57 +0000 (0:00:01.276) 0:14:59.770 ********
2026-04-09 05:26:10.681531 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:25:50.119794', 'end': '2026-04-09 05:25:50.182898', 'delta': '0:00:00.063104', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 05:26:10.681554 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:25:51.106455', 'end': '2026-04-09 05:25:51.157518', 'delta': '0:00:00.051063', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 05:26:10.681568 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '66330ed4242e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:25:51.700075', 'end': '2026-04-09 05:25:51.752455', 'delta': '0:00:00.052380', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['66330ed4242e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 05:26:10.681588 | orchestrator |
2026-04-09 05:26:10.681601 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-09 05:26:10.681614 | orchestrator | Thursday 09 April 2026 05:25:59 +0000 (0:00:01.183) 0:15:00.954 ********
2026-04-09 05:26:10.681627 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:26:10.681640 | orchestrator |
2026-04-09 05:26:10.681653 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 05:26:10.681666 | orchestrator | Thursday 09 April 2026 05:26:00 +0000 (0:00:01.266) 0:15:02.220 ********
2026-04-09 05:26:10.681679 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:26:10.681692 | orchestrator |
2026-04-09 05:26:10.681704 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 05:26:10.681715 | orchestrator | Thursday 09 April 2026 05:26:01 +0000 (0:00:01.251) 0:15:03.472 ********
2026-04-09 05:26:10.681726 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:26:10.681738 | orchestrator |
2026-04-09 05:26:10.681749 | orchestrator | TASK
[ceph-facts : Get current fsid] ******************************************* 2026-04-09 05:26:10.681779 | orchestrator | Thursday 09 April 2026 05:26:02 +0000 (0:00:01.177) 0:15:04.649 ******** 2026-04-09 05:26:10.681791 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:26:10.681803 | orchestrator | 2026-04-09 05:26:10.681814 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:26:10.681825 | orchestrator | Thursday 09 April 2026 05:26:04 +0000 (0:00:02.015) 0:15:06.665 ******** 2026-04-09 05:26:10.681855 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:26:10.681866 | orchestrator | 2026-04-09 05:26:10.681877 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 05:26:10.681888 | orchestrator | Thursday 09 April 2026 05:26:05 +0000 (0:00:01.130) 0:15:07.796 ******** 2026-04-09 05:26:10.681899 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:10.681910 | orchestrator | 2026-04-09 05:26:10.681921 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 05:26:10.681932 | orchestrator | Thursday 09 April 2026 05:26:07 +0000 (0:00:01.145) 0:15:08.942 ******** 2026-04-09 05:26:10.681943 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:10.681954 | orchestrator | 2026-04-09 05:26:10.681965 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:26:10.681976 | orchestrator | Thursday 09 April 2026 05:26:08 +0000 (0:00:01.255) 0:15:10.197 ******** 2026-04-09 05:26:10.681987 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:10.681998 | orchestrator | 2026-04-09 05:26:10.682009 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 05:26:10.682075 | orchestrator | Thursday 09 April 2026 05:26:09 +0000 (0:00:01.139) 0:15:11.337 ******** 
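The "Set_fact running_mon - container" results above show one registered `docker ps -q --filter name=ceph-mon-<host>` result per mon host, each with the container ID in `stdout`. The role's actual logic is Jinja inside `set_fact`; the following is only a minimal Python sketch of the selection it performs (container IDs copied from the log, the `next(...)` reduction is an assumption about the role's behavior):

```python
# Sketch: pick the first mon host whose `docker ps -q --filter
# name=ceph-mon-<host>` check returned a non-empty container ID.
# stdout values are the container IDs visible in the log above.
results = [
    {"item": "testbed-node-0", "stdout": "69d38aa54653", "rc": 0},
    {"item": "testbed-node-1", "stdout": "3e7867c40460", "rc": 0},
    {"item": "testbed-node-2", "stdout": "66330ed4242e", "rc": 0},
]

# First host with a running ceph-mon container wins; None if no mon is up.
running_mon = next((r["item"] for r in results if r["stdout"]), None)
print(running_mon)
```

Here all three checks succeeded, so the first host in loop order (`testbed-node-0`) is selected, matching the later delegation `testbed-node-2 -> testbed-node-0(192.168.16.10)` in the "Get current fsid" task.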
2026-04-09 05:26:10.682096 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:10.682107 | orchestrator | 2026-04-09 05:26:10.682118 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 05:26:10.682137 | orchestrator | Thursday 09 April 2026 05:26:10 +0000 (0:00:01.201) 0:15:12.538 ******** 2026-04-09 05:26:17.872368 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:17.872508 | orchestrator | 2026-04-09 05:26:17.872542 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 05:26:17.872565 | orchestrator | Thursday 09 April 2026 05:26:11 +0000 (0:00:01.152) 0:15:13.691 ******** 2026-04-09 05:26:17.872584 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:17.872600 | orchestrator | 2026-04-09 05:26:17.872618 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 05:26:17.872637 | orchestrator | Thursday 09 April 2026 05:26:12 +0000 (0:00:01.120) 0:15:14.811 ******** 2026-04-09 05:26:17.872686 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:17.872708 | orchestrator | 2026-04-09 05:26:17.872727 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 05:26:17.872747 | orchestrator | Thursday 09 April 2026 05:26:14 +0000 (0:00:01.230) 0:15:16.042 ******** 2026-04-09 05:26:17.872802 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:17.872821 | orchestrator | 2026-04-09 05:26:17.872856 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 05:26:17.872878 | orchestrator | Thursday 09 April 2026 05:26:15 +0000 (0:00:01.208) 0:15:17.251 ******** 2026-04-09 05:26:17.872897 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:17.872918 | orchestrator | 2026-04-09 05:26:17.872939 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-04-09 05:26:17.872959 | orchestrator | Thursday 09 April 2026 05:26:16 +0000 (0:00:01.121) 0:15:18.372 ******** 2026-04-09 05:26:17.872980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:26:17.872998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:26:17.873012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:26:17.873035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-14-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:26:17.873058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:26:17.873080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:26:17.873126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:26:17.873179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dc1c8a18', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:26:17.873204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:26:17.873225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:26:17.873245 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:17.873263 | orchestrator | 2026-04-09 05:26:17.873284 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 05:26:17.873305 | orchestrator | Thursday 09 April 2026 05:26:17 +0000 (0:00:01.289) 0:15:19.661 ******** 2026-04-09 05:26:17.873325 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:17.873371 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:26.673524 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:26.673638 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-14-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:26.673656 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:26.673669 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:26.673681 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:26.673792 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dc1c8a18', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:26.673813 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:26.673825 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:26:26.673838 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:26.673852 | orchestrator | 2026-04-09 05:26:26.673864 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 05:26:26.673877 | 
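Every item in the "Set_fact devices generate device list when osd_auto_discovery" task above is skipped with `false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'`. A tiny Python sketch of that condition (group contents and the group name `osds` are illustrative assumptions, not shown in the log):

```python
# Sketch of the skip condition reported in the log:
# 'inventory_hostname in groups.get(osd_group_name, [])'
# evaluates False, so each loop item is skipped.
groups = {"mons": ["testbed-node-0", "testbed-node-1", "testbed-node-2"]}
osd_group_name = "osds"              # assumption: group name not shown in log
inventory_hostname = "testbed-node-2"

# .get() with a default empty list keeps the check safe when the
# group is absent from the inventory during this upgrade phase.
should_run = inventory_hostname in groups.get(osd_group_name, [])
print(should_run)
```

This is why the per-device items (the loop devices, `sr0`, `sda`) are enumerated but never acted on: the host fails the group-membership gate before any device filtering happens.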
orchestrator | Thursday 09 April 2026 05:26:19 +0000 (0:00:01.238) 0:15:20.900 ******** 2026-04-09 05:26:26.673896 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:26:26.673909 | orchestrator | 2026-04-09 05:26:26.673920 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 05:26:26.673932 | orchestrator | Thursday 09 April 2026 05:26:20 +0000 (0:00:01.555) 0:15:22.456 ******** 2026-04-09 05:26:26.673943 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:26:26.673954 | orchestrator | 2026-04-09 05:26:26.673965 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:26:26.673976 | orchestrator | Thursday 09 April 2026 05:26:21 +0000 (0:00:01.116) 0:15:23.572 ******** 2026-04-09 05:26:26.673987 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:26:26.673999 | orchestrator | 2026-04-09 05:26:26.674010 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:26:26.674087 | orchestrator | Thursday 09 April 2026 05:26:23 +0000 (0:00:01.412) 0:15:24.985 ******** 2026-04-09 05:26:26.674101 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:26.674114 | orchestrator | 2026-04-09 05:26:26.674128 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:26:26.674141 | orchestrator | Thursday 09 April 2026 05:26:24 +0000 (0:00:01.128) 0:15:26.114 ******** 2026-04-09 05:26:26.674154 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:26.674167 | orchestrator | 2026-04-09 05:26:26.674181 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:26:26.674193 | orchestrator | Thursday 09 April 2026 05:26:25 +0000 (0:00:01.284) 0:15:27.398 ******** 2026-04-09 05:26:26.674207 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:26:26.674220 | orchestrator | 2026-04-09 05:26:26.674232 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 05:26:26.674254 | orchestrator | Thursday 09 April 2026 05:26:26 +0000 (0:00:01.139) 0:15:28.537 ******** 2026-04-09 05:27:05.099799 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-09 05:27:05.099905 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-09 05:27:05.099918 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-09 05:27:05.099929 | orchestrator | 2026-04-09 05:27:05.099939 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 05:27:05.099966 | orchestrator | Thursday 09 April 2026 05:26:28 +0000 (0:00:02.009) 0:15:30.547 ******** 2026-04-09 05:27:05.099976 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-09 05:27:05.099985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-09 05:27:05.099994 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-09 05:27:05.100003 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:27:05.100012 | orchestrator | 2026-04-09 05:27:05.100021 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 05:27:05.100030 | orchestrator | Thursday 09 April 2026 05:26:29 +0000 (0:00:01.290) 0:15:31.838 ******** 2026-04-09 05:27:05.100039 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:27:05.100048 | orchestrator | 2026-04-09 05:27:05.100057 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 05:27:05.100065 | orchestrator | Thursday 09 April 2026 05:26:31 +0000 (0:00:01.182) 0:15:33.020 ******** 2026-04-09 05:27:05.100074 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:27:05.100083 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-04-09 05:27:05.100092 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-09 05:27:05.100101 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 05:27:05.100110 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 05:27:05.100118 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 05:27:05.100127 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:27:05.100156 | orchestrator | 2026-04-09 05:27:05.100166 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 05:27:05.100174 | orchestrator | Thursday 09 April 2026 05:26:32 +0000 (0:00:01.826) 0:15:34.847 ******** 2026-04-09 05:27:05.100183 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:27:05.100192 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:27:05.100201 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-09 05:27:05.100209 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 05:27:05.100218 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 05:27:05.100226 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 05:27:05.100235 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:27:05.100244 | orchestrator | 2026-04-09 05:27:05.100253 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-09 05:27:05.100261 | orchestrator | Thursday 09 April 2026 05:26:35 +0000 (0:00:02.200) 0:15:37.048 
******** 2026-04-09 05:27:05.100270 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:27:05.100279 | orchestrator | 2026-04-09 05:27:05.100288 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-09 05:27:05.100297 | orchestrator | Thursday 09 April 2026 05:26:36 +0000 (0:00:00.878) 0:15:37.926 ******** 2026-04-09 05:27:05.100306 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:27:05.100316 | orchestrator | 2026-04-09 05:27:05.100326 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-09 05:27:05.100337 | orchestrator | Thursday 09 April 2026 05:26:36 +0000 (0:00:00.870) 0:15:38.797 ******** 2026-04-09 05:27:05.100348 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:27:05.100358 | orchestrator | 2026-04-09 05:27:05.100368 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-09 05:27:05.100379 | orchestrator | Thursday 09 April 2026 05:26:37 +0000 (0:00:00.809) 0:15:39.607 ******** 2026-04-09 05:27:05.100389 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:27:05.100399 | orchestrator | 2026-04-09 05:27:05.100409 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-09 05:27:05.100420 | orchestrator | Thursday 09 April 2026 05:26:38 +0000 (0:00:00.902) 0:15:40.509 ******** 2026-04-09 05:27:05.100430 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:27:05.100441 | orchestrator | 2026-04-09 05:27:05.100451 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-09 05:27:05.100462 | orchestrator | Thursday 09 April 2026 05:26:39 +0000 (0:00:00.802) 0:15:41.311 ******** 2026-04-09 05:27:05.100472 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-09 05:27:05.100483 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-09 
05:27:05.100493 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-09 05:27:05.100504 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:27:05.100514 | orchestrator | 2026-04-09 05:27:05.100524 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-09 05:27:05.100534 | orchestrator | Thursday 09 April 2026 05:26:40 +0000 (0:00:01.078) 0:15:42.390 ******** 2026-04-09 05:27:05.100544 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-04-09 05:27:05.100554 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-04-09 05:27:05.100579 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-04-09 05:27:05.100590 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-04-09 05:27:05.100601 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-04-09 05:27:05.100612 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-04-09 05:27:05.100631 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:27:05.100641 | orchestrator | 2026-04-09 05:27:05.100649 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-09 05:27:05.100658 | orchestrator | Thursday 09 April 2026 05:26:42 +0000 (0:00:01.666) 0:15:44.056 ******** 2026-04-09 05:27:05.100667 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-04-09 05:27:05.100676 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-09 05:27:05.100684 | orchestrator | 2026-04-09 05:27:05.100693 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-09 05:27:05.100702 | orchestrator | Thursday 09 April 2026 05:26:45 +0000 (0:00:03.197) 0:15:47.254 ******** 
2026-04-09 05:27:05.100711 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:27:05.100719 | orchestrator |
2026-04-09 05:27:05.100781 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 05:27:05.100791 | orchestrator | Thursday 09 April 2026 05:26:47 +0000 (0:00:02.164) 0:15:49.419 ********
2026-04-09 05:27:05.100799 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-04-09 05:27:05.100809 | orchestrator |
2026-04-09 05:27:05.100818 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 05:27:05.100826 | orchestrator | Thursday 09 April 2026 05:26:48 +0000 (0:00:01.220) 0:15:50.640 ********
2026-04-09 05:27:05.100835 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-04-09 05:27:05.100844 | orchestrator |
2026-04-09 05:27:05.100853 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 05:27:05.100862 | orchestrator | Thursday 09 April 2026 05:26:49 +0000 (0:00:01.104) 0:15:51.745 ********
2026-04-09 05:27:05.100870 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:05.100879 | orchestrator |
2026-04-09 05:27:05.100888 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 05:27:05.100896 | orchestrator | Thursday 09 April 2026 05:26:51 +0000 (0:00:01.568) 0:15:53.313 ********
2026-04-09 05:27:05.100905 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:05.100914 | orchestrator |
2026-04-09 05:27:05.100922 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 05:27:05.100931 | orchestrator | Thursday 09 April 2026 05:26:52 +0000 (0:00:01.161) 0:15:54.475 ********
2026-04-09 05:27:05.100939 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:05.100948 | orchestrator |
2026-04-09 05:27:05.100957 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 05:27:05.100965 | orchestrator | Thursday 09 April 2026 05:26:53 +0000 (0:00:01.129) 0:15:55.604 ********
2026-04-09 05:27:05.100974 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:05.100982 | orchestrator |
2026-04-09 05:27:05.100991 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 05:27:05.100999 | orchestrator | Thursday 09 April 2026 05:26:54 +0000 (0:00:01.140) 0:15:56.744 ********
2026-04-09 05:27:05.101008 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:05.101017 | orchestrator |
2026-04-09 05:27:05.101026 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 05:27:05.101034 | orchestrator | Thursday 09 April 2026 05:26:56 +0000 (0:00:01.510) 0:15:58.255 ********
2026-04-09 05:27:05.101043 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:05.101051 | orchestrator |
2026-04-09 05:27:05.101060 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 05:27:05.101069 | orchestrator | Thursday 09 April 2026 05:26:57 +0000 (0:00:01.193) 0:15:59.448 ********
2026-04-09 05:27:05.101077 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:05.101086 | orchestrator |
2026-04-09 05:27:05.101094 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 05:27:05.101103 | orchestrator | Thursday 09 April 2026 05:26:58 +0000 (0:00:01.160) 0:16:00.609 ********
2026-04-09 05:27:05.101118 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:05.101127 | orchestrator |
2026-04-09 05:27:05.101135 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 05:27:05.101144 | orchestrator | Thursday 09 April 2026 05:27:00 +0000 (0:00:01.590) 0:16:02.200 ********
2026-04-09 05:27:05.101153 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:05.101161 | orchestrator |
2026-04-09 05:27:05.101170 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 05:27:05.101179 | orchestrator | Thursday 09 April 2026 05:27:01 +0000 (0:00:01.543) 0:16:03.743 ********
2026-04-09 05:27:05.101187 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:05.101196 | orchestrator |
2026-04-09 05:27:05.101204 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 05:27:05.101213 | orchestrator | Thursday 09 April 2026 05:27:02 +0000 (0:00:00.774) 0:16:04.518 ********
2026-04-09 05:27:05.101222 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:05.101230 | orchestrator |
2026-04-09 05:27:05.101239 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 05:27:05.101247 | orchestrator | Thursday 09 April 2026 05:27:03 +0000 (0:00:00.810) 0:16:05.329 ********
2026-04-09 05:27:05.101256 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:05.101265 | orchestrator |
2026-04-09 05:27:05.101273 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 05:27:05.101282 | orchestrator | Thursday 09 April 2026 05:27:04 +0000 (0:00:00.802) 0:16:06.131 ********
2026-04-09 05:27:05.101291 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:05.101299 | orchestrator |
2026-04-09 05:27:05.101308 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 05:27:05.101316 | orchestrator | Thursday 09 April 2026 05:27:05 +0000 (0:00:00.788) 0:16:06.920 ********
2026-04-09 05:27:05.101331 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391211 | orchestrator |
2026-04-09 05:27:45.391327 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 05:27:45.391344 | orchestrator | Thursday 09 April 2026 05:27:05 +0000 (0:00:00.775) 0:16:07.696 ********
2026-04-09 05:27:45.391357 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391369 | orchestrator |
2026-04-09 05:27:45.391380 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 05:27:45.391409 | orchestrator | Thursday 09 April 2026 05:27:06 +0000 (0:00:00.792) 0:16:08.488 ********
2026-04-09 05:27:45.391421 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391431 | orchestrator |
2026-04-09 05:27:45.391443 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 05:27:45.391454 | orchestrator | Thursday 09 April 2026 05:27:07 +0000 (0:00:00.769) 0:16:09.257 ********
2026-04-09 05:27:45.391466 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:45.391477 | orchestrator |
2026-04-09 05:27:45.391488 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 05:27:45.391499 | orchestrator | Thursday 09 April 2026 05:27:08 +0000 (0:00:00.791) 0:16:10.049 ********
2026-04-09 05:27:45.391510 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:45.391520 | orchestrator |
2026-04-09 05:27:45.391531 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 05:27:45.391542 | orchestrator | Thursday 09 April 2026 05:27:09 +0000 (0:00:00.842) 0:16:10.891 ********
2026-04-09 05:27:45.391553 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:45.391564 | orchestrator |
2026-04-09 05:27:45.391575 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-09 05:27:45.391586 | orchestrator | Thursday 09 April 2026 05:27:09 +0000 (0:00:00.819) 0:16:11.711 ********
2026-04-09 05:27:45.391596 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391607 | orchestrator |
2026-04-09 05:27:45.391618 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-09 05:27:45.391628 | orchestrator | Thursday 09 April 2026 05:27:10 +0000 (0:00:00.836) 0:16:12.548 ********
2026-04-09 05:27:45.391661 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391673 | orchestrator |
2026-04-09 05:27:45.391684 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-09 05:27:45.391695 | orchestrator | Thursday 09 April 2026 05:27:11 +0000 (0:00:00.795) 0:16:13.343 ********
2026-04-09 05:27:45.391734 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391746 | orchestrator |
2026-04-09 05:27:45.391759 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-09 05:27:45.391773 | orchestrator | Thursday 09 April 2026 05:27:12 +0000 (0:00:00.790) 0:16:14.134 ********
2026-04-09 05:27:45.391785 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391798 | orchestrator |
2026-04-09 05:27:45.391810 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-09 05:27:45.391822 | orchestrator | Thursday 09 April 2026 05:27:13 +0000 (0:00:00.828) 0:16:14.962 ********
2026-04-09 05:27:45.391834 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391847 | orchestrator |
2026-04-09 05:27:45.391860 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-09 05:27:45.391872 | orchestrator | Thursday 09 April 2026 05:27:13 +0000 (0:00:00.777) 0:16:15.739 ********
2026-04-09 05:27:45.391885 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391897 | orchestrator |
2026-04-09 05:27:45.391910 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-09 05:27:45.391924 | orchestrator | Thursday 09 April 2026 05:27:14 +0000 (0:00:00.786) 0:16:16.526 ********
2026-04-09 05:27:45.391936 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391947 | orchestrator |
2026-04-09 05:27:45.391958 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-09 05:27:45.391970 | orchestrator | Thursday 09 April 2026 05:27:15 +0000 (0:00:00.793) 0:16:17.320 ********
2026-04-09 05:27:45.391980 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.391991 | orchestrator |
2026-04-09 05:27:45.392002 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-09 05:27:45.392013 | orchestrator | Thursday 09 April 2026 05:27:16 +0000 (0:00:00.770) 0:16:18.091 ********
2026-04-09 05:27:45.392024 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392034 | orchestrator |
2026-04-09 05:27:45.392045 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-09 05:27:45.392056 | orchestrator | Thursday 09 April 2026 05:27:16 +0000 (0:00:00.761) 0:16:18.853 ********
2026-04-09 05:27:45.392067 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392077 | orchestrator |
2026-04-09 05:27:45.392088 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-09 05:27:45.392099 | orchestrator | Thursday 09 April 2026 05:27:17 +0000 (0:00:00.754) 0:16:19.607 ********
2026-04-09 05:27:45.392110 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392121 | orchestrator |
2026-04-09 05:27:45.392132 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-09 05:27:45.392143 | orchestrator | Thursday 09 April 2026 05:27:18 +0000 (0:00:00.768) 0:16:20.376 ********
2026-04-09 05:27:45.392153 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392164 | orchestrator |
2026-04-09 05:27:45.392175 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-09 05:27:45.392186 | orchestrator | Thursday 09 April 2026 05:27:19 +0000 (0:00:00.786) 0:16:21.163 ********
2026-04-09 05:27:45.392196 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:45.392207 | orchestrator |
2026-04-09 05:27:45.392218 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-09 05:27:45.392229 | orchestrator | Thursday 09 April 2026 05:27:20 +0000 (0:00:01.574) 0:16:22.737 ********
2026-04-09 05:27:45.392240 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:45.392251 | orchestrator |
2026-04-09 05:27:45.392261 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-09 05:27:45.392272 | orchestrator | Thursday 09 April 2026 05:27:22 +0000 (0:00:02.008) 0:16:24.746 ********
2026-04-09 05:27:45.392292 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-04-09 05:27:45.392303 | orchestrator |
2026-04-09 05:27:45.392331 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-09 05:27:45.392357 | orchestrator | Thursday 09 April 2026 05:27:24 +0000 (0:00:01.192) 0:16:25.939 ********
2026-04-09 05:27:45.392368 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392379 | orchestrator |
2026-04-09 05:27:45.392390 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-09 05:27:45.392401 | orchestrator | Thursday 09 April 2026 05:27:25 +0000 (0:00:01.133) 0:16:27.073 ********
2026-04-09 05:27:45.392412 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392423 | orchestrator |
2026-04-09 05:27:45.392434 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-09 05:27:45.392445 | orchestrator | Thursday 09 April 2026 05:27:26 +0000 (0:00:01.161) 0:16:28.234 ********
2026-04-09 05:27:45.392455 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 05:27:45.392466 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 05:27:45.392478 | orchestrator |
2026-04-09 05:27:45.392489 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-09 05:27:45.392500 | orchestrator | Thursday 09 April 2026 05:27:28 +0000 (0:00:01.859) 0:16:30.094 ********
2026-04-09 05:27:45.392511 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:45.392522 | orchestrator |
2026-04-09 05:27:45.392626 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-09 05:27:45.392646 | orchestrator | Thursday 09 April 2026 05:27:29 +0000 (0:00:01.505) 0:16:31.599 ********
2026-04-09 05:27:45.392657 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392668 | orchestrator |
2026-04-09 05:27:45.392679 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-09 05:27:45.392690 | orchestrator | Thursday 09 April 2026 05:27:30 +0000 (0:00:01.182) 0:16:32.782 ********
2026-04-09 05:27:45.392723 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392735 | orchestrator |
2026-04-09 05:27:45.392746 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-09 05:27:45.392757 | orchestrator | Thursday 09 April 2026 05:27:31 +0000 (0:00:00.779) 0:16:33.561 ********
2026-04-09 05:27:45.392767 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392778 | orchestrator |
2026-04-09 05:27:45.392789 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-09 05:27:45.392800 | orchestrator | Thursday 09 April 2026 05:27:32 +0000 (0:00:00.792) 0:16:34.353 ********
2026-04-09 05:27:45.392810 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-04-09 05:27:45.392821 | orchestrator |
2026-04-09 05:27:45.392832 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-09 05:27:45.392843 | orchestrator | Thursday 09 April 2026 05:27:33 +0000 (0:00:01.160) 0:16:35.513 ********
2026-04-09 05:27:45.392853 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:45.392864 | orchestrator |
2026-04-09 05:27:45.392875 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-09 05:27:45.392886 | orchestrator | Thursday 09 April 2026 05:27:35 +0000 (0:00:01.876) 0:16:37.390 ********
2026-04-09 05:27:45.392897 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 05:27:45.392907 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 05:27:45.392918 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 05:27:45.392929 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392939 | orchestrator |
2026-04-09 05:27:45.392950 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-09 05:27:45.392961 | orchestrator | Thursday 09 April 2026 05:27:36 +0000 (0:00:01.137) 0:16:38.527 ********
2026-04-09 05:27:45.392981 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.392992 | orchestrator |
2026-04-09 05:27:45.393003 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-09 05:27:45.393014 | orchestrator | Thursday 09 April 2026 05:27:37 +0000 (0:00:01.128) 0:16:39.656 ********
2026-04-09 05:27:45.393024 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.393035 | orchestrator |
2026-04-09 05:27:45.393046 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-09 05:27:45.393057 | orchestrator | Thursday 09 April 2026 05:27:38 +0000 (0:00:01.171) 0:16:40.828 ********
2026-04-09 05:27:45.393068 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.393079 | orchestrator |
2026-04-09 05:27:45.393090 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-09 05:27:45.393100 | orchestrator | Thursday 09 April 2026 05:27:40 +0000 (0:00:01.186) 0:16:42.014 ********
2026-04-09 05:27:45.393111 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.393122 | orchestrator |
2026-04-09 05:27:45.393132 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-09 05:27:45.393143 | orchestrator | Thursday 09 April 2026 05:27:41 +0000 (0:00:01.133) 0:16:43.148 ********
2026-04-09 05:27:45.393154 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:27:45.393164 | orchestrator |
2026-04-09 05:27:45.393175 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 05:27:45.393186 | orchestrator | Thursday 09 April 2026 05:27:42 +0000 (0:00:00.844) 0:16:43.992 ********
2026-04-09 05:27:45.393197 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:45.393208 | orchestrator |
2026-04-09 05:27:45.393219 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 05:27:45.393229 | orchestrator | Thursday 09 April 2026 05:27:44 +0000 (0:00:02.230) 0:16:46.222 ********
2026-04-09 05:27:45.393240 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:27:45.393251 | orchestrator |
2026-04-09 05:27:45.393262 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 05:27:45.393272 | orchestrator | Thursday 09 April 2026 05:27:45 +0000 (0:00:00.784) 0:16:47.007 ********
2026-04-09 05:27:45.393283 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-04-09 05:27:45.393294 | orchestrator |
2026-04-09 05:27:45.393314 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 05:28:22.253873 | orchestrator | Thursday 09 April 2026 05:27:46 +0000 (0:00:01.098) 0:16:48.105 ********
2026-04-09 05:28:22.253991 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254008 | orchestrator |
2026-04-09 05:28:22.254079 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 05:28:22.254108 | orchestrator | Thursday 09 April 2026 05:27:47 +0000 (0:00:01.131) 0:16:49.237 ********
2026-04-09 05:28:22.254120 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254131 | orchestrator |
2026-04-09 05:28:22.254142 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 05:28:22.254153 | orchestrator | Thursday 09 April 2026 05:27:48 +0000 (0:00:01.120) 0:16:50.358 ********
2026-04-09 05:28:22.254164 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254175 | orchestrator |
2026-04-09 05:28:22.254186 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 05:28:22.254197 | orchestrator | Thursday 09 April 2026 05:27:49 +0000 (0:00:01.159) 0:16:51.517 ********
2026-04-09 05:28:22.254208 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254218 | orchestrator |
2026-04-09 05:28:22.254230 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 05:28:22.254240 | orchestrator | Thursday 09 April 2026 05:27:50 +0000 (0:00:01.151) 0:16:52.669 ********
2026-04-09 05:28:22.254251 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254262 | orchestrator |
2026-04-09 05:28:22.254274 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 05:28:22.254308 | orchestrator | Thursday 09 April 2026 05:27:51 +0000 (0:00:01.132) 0:16:53.801 ********
2026-04-09 05:28:22.254319 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254330 | orchestrator |
2026-04-09 05:28:22.254341 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 05:28:22.254352 | orchestrator | Thursday 09 April 2026 05:27:53 +0000 (0:00:01.176) 0:16:54.978 ********
2026-04-09 05:28:22.254363 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254373 | orchestrator |
2026-04-09 05:28:22.254384 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 05:28:22.254395 | orchestrator | Thursday 09 April 2026 05:27:54 +0000 (0:00:01.177) 0:16:56.156 ********
2026-04-09 05:28:22.254408 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254420 | orchestrator |
2026-04-09 05:28:22.254433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 05:28:22.254446 | orchestrator | Thursday 09 April 2026 05:27:55 +0000 (0:00:01.231) 0:16:57.388 ********
2026-04-09 05:28:22.254458 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:28:22.254472 | orchestrator |
2026-04-09 05:28:22.254484 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 05:28:22.254496 | orchestrator | Thursday 09 April 2026 05:27:56 +0000 (0:00:00.789) 0:16:58.178 ********
2026-04-09 05:28:22.254508 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-04-09 05:28:22.254522 | orchestrator |
2026-04-09 05:28:22.254534 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 05:28:22.254547 | orchestrator | Thursday 09 April 2026 05:27:57 +0000 (0:00:01.115) 0:16:59.294 ********
2026-04-09 05:28:22.254560 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-04-09 05:28:22.254573 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-09 05:28:22.254584 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-09 05:28:22.254595 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-09 05:28:22.254605 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-09 05:28:22.254616 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-09 05:28:22.254627 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-09 05:28:22.254637 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-09 05:28:22.254649 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 05:28:22.254659 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 05:28:22.254670 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 05:28:22.254681 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 05:28:22.254721 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 05:28:22.254732 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 05:28:22.254743 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-04-09 05:28:22.254754 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-04-09 05:28:22.254765 | orchestrator |
2026-04-09 05:28:22.254776 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 05:28:22.254787 | orchestrator | Thursday 09 April 2026 05:28:03 +0000 (0:00:06.536) 0:17:05.830 ********
2026-04-09 05:28:22.254797 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254808 | orchestrator |
2026-04-09 05:28:22.254819 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 05:28:22.254830 | orchestrator | Thursday 09 April 2026 05:28:04 +0000 (0:00:00.825) 0:17:06.655 ********
2026-04-09 05:28:22.254841 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254852 | orchestrator |
2026-04-09 05:28:22.254863 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 05:28:22.254874 | orchestrator | Thursday 09 April 2026 05:28:05 +0000 (0:00:00.758) 0:17:07.413 ********
2026-04-09 05:28:22.254893 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254904 | orchestrator |
2026-04-09 05:28:22.254915 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 05:28:22.254926 | orchestrator | Thursday 09 April 2026 05:28:06 +0000 (0:00:00.772) 0:17:08.186 ********
2026-04-09 05:28:22.254937 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.254948 | orchestrator |
2026-04-09 05:28:22.254959 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 05:28:22.254988 | orchestrator | Thursday 09 April 2026 05:28:07 +0000 (0:00:00.778) 0:17:08.964 ********
2026-04-09 05:28:22.255000 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255011 | orchestrator |
2026-04-09 05:28:22.255022 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 05:28:22.255038 | orchestrator | Thursday 09 April 2026 05:28:07 +0000 (0:00:00.775) 0:17:09.740 ********
2026-04-09 05:28:22.255052 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255070 | orchestrator |
2026-04-09 05:28:22.255088 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 05:28:22.255106 | orchestrator | Thursday 09 April 2026 05:28:08 +0000 (0:00:00.778) 0:17:10.518 ********
2026-04-09 05:28:22.255123 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255140 | orchestrator |
2026-04-09 05:28:22.255157 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 05:28:22.255174 | orchestrator | Thursday 09 April 2026 05:28:09 +0000 (0:00:00.822) 0:17:11.341 ********
2026-04-09 05:28:22.255190 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255208 | orchestrator |
2026-04-09 05:28:22.255225 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 05:28:22.255242 | orchestrator | Thursday 09 April 2026 05:28:10 +0000 (0:00:00.841) 0:17:12.183 ********
2026-04-09 05:28:22.255258 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255275 | orchestrator |
2026-04-09 05:28:22.255293 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 05:28:22.255310 | orchestrator | Thursday 09 April 2026 05:28:11 +0000 (0:00:00.790) 0:17:12.973 ********
2026-04-09 05:28:22.255327 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255345 | orchestrator |
2026-04-09 05:28:22.255364 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 05:28:22.255382 | orchestrator | Thursday 09 April 2026 05:28:11 +0000 (0:00:00.763) 0:17:13.737 ********
2026-04-09 05:28:22.255400 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255419 | orchestrator |
2026-04-09 05:28:22.255437 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 05:28:22.255456 | orchestrator | Thursday 09 April 2026 05:28:12 +0000 (0:00:00.840) 0:17:14.577 ********
2026-04-09 05:28:22.255467 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255478 | orchestrator |
2026-04-09 05:28:22.255489 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 05:28:22.255500 | orchestrator | Thursday 09 April 2026 05:28:13 +0000 (0:00:00.848) 0:17:15.426 ********
2026-04-09 05:28:22.255510 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255521 | orchestrator |
2026-04-09 05:28:22.255531 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 05:28:22.255542 | orchestrator | Thursday 09 April 2026 05:28:14 +0000 (0:00:00.870) 0:17:16.297 ********
2026-04-09 05:28:22.255553 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255564 | orchestrator |
2026-04-09 05:28:22.255574 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 05:28:22.255585 | orchestrator | Thursday 09 April 2026 05:28:15 +0000 (0:00:00.746) 0:17:17.043 ********
2026-04-09 05:28:22.255596 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255606 | orchestrator |
2026-04-09 05:28:22.255617 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-09 05:28:22.255639 | orchestrator | Thursday 09 April 2026 05:28:16 +0000 (0:00:00.894) 0:17:17.938 ********
2026-04-09 05:28:22.255650 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255661 | orchestrator |
2026-04-09 05:28:22.255671 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-09 05:28:22.255726 | orchestrator | Thursday 09 April 2026 05:28:16 +0000 (0:00:00.797) 0:17:18.735 ********
2026-04-09 05:28:22.255741 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255752 | orchestrator |
2026-04-09 05:28:22.255763 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:28:22.255775 | orchestrator | Thursday 09 April 2026 05:28:17 +0000 (0:00:00.806) 0:17:19.542 ********
2026-04-09 05:28:22.255786 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255797 | orchestrator |
2026-04-09 05:28:22.255807 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:28:22.255818 | orchestrator | Thursday 09 April 2026 05:28:18 +0000 (0:00:00.789) 0:17:20.331 ********
2026-04-09 05:28:22.255829 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255840 | orchestrator |
2026-04-09 05:28:22.255851 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:28:22.255861 | orchestrator | Thursday 09 April 2026 05:28:19 +0000 (0:00:00.780) 0:17:21.111 ********
2026-04-09 05:28:22.255872 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255883 | orchestrator |
2026-04-09 05:28:22.255894 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:28:22.255905 | orchestrator | Thursday 09 April 2026 05:28:20 +0000 (0:00:00.832) 0:17:21.944 ********
2026-04-09 05:28:22.255916 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.255926 | orchestrator |
2026-04-09 05:28:22.255937 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:28:22.255948 | orchestrator | Thursday 09 April 2026 05:28:20 +0000 (0:00:00.833) 0:17:22.778 ********
2026-04-09 05:28:22.255958 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-09 05:28:22.255969 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-09 05:28:22.255980 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-09 05:28:22.255991 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:28:22.256002 | orchestrator | 2026-04-09 05:28:22.256013 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 05:28:22.256023 | orchestrator | Thursday 09 April 2026 05:28:21 +0000 (0:00:01.070) 0:17:23.848 ******** 2026-04-09 05:28:22.256034 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-09 05:28:22.256056 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-09 05:29:41.528307 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-09 05:29:41.528463 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:29:41.528496 | orchestrator | 2026-04-09 05:29:41.528518 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 05:29:41.528559 | orchestrator | Thursday 09 April 2026 05:28:23 +0000 (0:00:01.068) 0:17:24.917 ******** 2026-04-09 05:29:41.528577 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-09 05:29:41.528596 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-09 05:29:41.528615 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-09 05:29:41.528634 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:29:41.528711 | orchestrator | 2026-04-09 05:29:41.528733 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 05:29:41.528750 | orchestrator | Thursday 09 April 2026 05:28:24 +0000 (0:00:01.054) 0:17:25.971 ******** 2026-04-09 05:29:41.528768 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:29:41.528787 | orchestrator | 2026-04-09 05:29:41.528804 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 05:29:41.528856 | orchestrator | Thursday 09 April 2026 05:28:24 +0000 (0:00:00.791) 0:17:26.763 ******** 2026-04-09 05:29:41.528879 | orchestrator | skipping: 
[testbed-node-2] => (item=0)  2026-04-09 05:29:41.528897 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:29:41.528916 | orchestrator | 2026-04-09 05:29:41.528933 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-09 05:29:41.528951 | orchestrator | Thursday 09 April 2026 05:28:25 +0000 (0:00:00.898) 0:17:27.662 ******** 2026-04-09 05:29:41.528970 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.528989 | orchestrator | 2026-04-09 05:29:41.529007 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-09 05:29:41.529028 | orchestrator | Thursday 09 April 2026 05:28:27 +0000 (0:00:01.368) 0:17:29.031 ******** 2026-04-09 05:29:41.529042 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.529053 | orchestrator | 2026-04-09 05:29:41.529064 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-09 05:29:41.529075 | orchestrator | Thursday 09 April 2026 05:28:27 +0000 (0:00:00.803) 0:17:29.835 ******** 2026-04-09 05:29:41.529091 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-04-09 05:29:41.529110 | orchestrator | 2026-04-09 05:29:41.529130 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-09 05:29:41.529148 | orchestrator | Thursday 09 April 2026 05:28:29 +0000 (0:00:01.206) 0:17:31.041 ******** 2026-04-09 05:29:41.529166 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.529185 | orchestrator | 2026-04-09 05:29:41.529204 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-09 05:29:41.529222 | orchestrator | Thursday 09 April 2026 05:28:32 +0000 (0:00:03.505) 0:17:34.546 ******** 2026-04-09 05:29:41.529240 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:29:41.529259 | orchestrator | 2026-04-09 05:29:41.529277 | 
orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-09 05:29:41.529295 | orchestrator | Thursday 09 April 2026 05:28:33 +0000 (0:00:01.166) 0:17:35.714 ******** 2026-04-09 05:29:41.529315 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.529328 | orchestrator | 2026-04-09 05:29:41.529339 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-09 05:29:41.529350 | orchestrator | Thursday 09 April 2026 05:28:35 +0000 (0:00:01.175) 0:17:36.889 ******** 2026-04-09 05:29:41.529361 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.529372 | orchestrator | 2026-04-09 05:29:41.529383 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-09 05:29:41.529394 | orchestrator | Thursday 09 April 2026 05:28:36 +0000 (0:00:01.169) 0:17:38.059 ******** 2026-04-09 05:29:41.529405 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:29:41.529416 | orchestrator | 2026-04-09 05:29:41.529427 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-09 05:29:41.529438 | orchestrator | Thursday 09 April 2026 05:28:38 +0000 (0:00:02.003) 0:17:40.063 ******** 2026-04-09 05:29:41.529449 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.529460 | orchestrator | 2026-04-09 05:29:41.529471 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-09 05:29:41.529482 | orchestrator | Thursday 09 April 2026 05:28:39 +0000 (0:00:01.607) 0:17:41.671 ******** 2026-04-09 05:29:41.529493 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.529504 | orchestrator | 2026-04-09 05:29:41.529514 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-09 05:29:41.529525 | orchestrator | Thursday 09 April 2026 05:28:41 +0000 (0:00:01.504) 0:17:43.175 ******** 2026-04-09 05:29:41.529536 
| orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.529547 | orchestrator | 2026-04-09 05:29:41.529558 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-09 05:29:41.529568 | orchestrator | Thursday 09 April 2026 05:28:42 +0000 (0:00:01.500) 0:17:44.676 ******** 2026-04-09 05:29:41.529579 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:29:41.529603 | orchestrator | 2026-04-09 05:29:41.529614 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-09 05:29:41.529625 | orchestrator | Thursday 09 April 2026 05:28:44 +0000 (0:00:01.593) 0:17:46.269 ******** 2026-04-09 05:29:41.529636 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:29:41.529647 | orchestrator | 2026-04-09 05:29:41.529696 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-09 05:29:41.529708 | orchestrator | Thursday 09 April 2026 05:28:45 +0000 (0:00:01.562) 0:17:47.832 ******** 2026-04-09 05:29:41.529718 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 05:29:41.529729 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 05:29:41.529740 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-09 05:29:41.529752 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-09 05:29:41.529763 | orchestrator | 2026-04-09 05:29:41.529796 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-09 05:29:41.529808 | orchestrator | Thursday 09 April 2026 05:28:49 +0000 (0:00:03.921) 0:17:51.754 ******** 2026-04-09 05:29:41.529820 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:29:41.529831 | orchestrator | 2026-04-09 05:29:41.529851 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-04-09 05:29:41.529862 | orchestrator | Thursday 09 April 2026 05:28:51 +0000 (0:00:02.009) 0:17:53.763 ******** 2026-04-09 05:29:41.529873 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.529884 | orchestrator | 2026-04-09 05:29:41.529895 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-09 05:29:41.529906 | orchestrator | Thursday 09 April 2026 05:28:53 +0000 (0:00:01.167) 0:17:54.931 ******** 2026-04-09 05:29:41.529924 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.529943 | orchestrator | 2026-04-09 05:29:41.529962 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-09 05:29:41.529981 | orchestrator | Thursday 09 April 2026 05:28:54 +0000 (0:00:01.175) 0:17:56.106 ******** 2026-04-09 05:29:41.530000 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.530096 | orchestrator | 2026-04-09 05:29:41.530114 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-09 05:29:41.530125 | orchestrator | Thursday 09 April 2026 05:28:55 +0000 (0:00:01.743) 0:17:57.850 ******** 2026-04-09 05:29:41.530136 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.530147 | orchestrator | 2026-04-09 05:29:41.530158 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-09 05:29:41.530169 | orchestrator | Thursday 09 April 2026 05:28:57 +0000 (0:00:01.499) 0:17:59.350 ******** 2026-04-09 05:29:41.530180 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:29:41.530190 | orchestrator | 2026-04-09 05:29:41.530201 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-09 05:29:41.530212 | orchestrator | Thursday 09 April 2026 05:28:58 +0000 (0:00:00.811) 0:18:00.161 ******** 2026-04-09 05:29:41.530223 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-04-09 05:29:41.530234 | orchestrator | 2026-04-09 05:29:41.530245 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-09 05:29:41.530256 | orchestrator | Thursday 09 April 2026 05:28:59 +0000 (0:00:01.154) 0:18:01.316 ******** 2026-04-09 05:29:41.530266 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:29:41.530277 | orchestrator | 2026-04-09 05:29:41.530288 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-09 05:29:41.530299 | orchestrator | Thursday 09 April 2026 05:29:00 +0000 (0:00:01.141) 0:18:02.458 ******** 2026-04-09 05:29:41.530309 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:29:41.530320 | orchestrator | 2026-04-09 05:29:41.530331 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-09 05:29:41.530342 | orchestrator | Thursday 09 April 2026 05:29:01 +0000 (0:00:01.094) 0:18:03.553 ******** 2026-04-09 05:29:41.530363 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-04-09 05:29:41.530374 | orchestrator | 2026-04-09 05:29:41.530386 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-09 05:29:41.530396 | orchestrator | Thursday 09 April 2026 05:29:02 +0000 (0:00:01.129) 0:18:04.683 ******** 2026-04-09 05:29:41.530407 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.530418 | orchestrator | 2026-04-09 05:29:41.530429 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-09 05:29:41.530440 | orchestrator | Thursday 09 April 2026 05:29:05 +0000 (0:00:02.261) 0:18:06.944 ******** 2026-04-09 05:29:41.530451 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.530462 | orchestrator | 2026-04-09 05:29:41.530473 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-04-09 05:29:41.530483 | orchestrator | Thursday 09 April 2026 05:29:07 +0000 (0:00:01.963) 0:18:08.907 ******** 2026-04-09 05:29:41.530494 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.530505 | orchestrator | 2026-04-09 05:29:41.530516 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-09 05:29:41.530527 | orchestrator | Thursday 09 April 2026 05:29:09 +0000 (0:00:02.415) 0:18:11.322 ******** 2026-04-09 05:29:41.530537 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:29:41.530548 | orchestrator | 2026-04-09 05:29:41.530559 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-09 05:29:41.530570 | orchestrator | Thursday 09 April 2026 05:29:12 +0000 (0:00:02.863) 0:18:14.186 ******** 2026-04-09 05:29:41.530581 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-04-09 05:29:41.530592 | orchestrator | 2026-04-09 05:29:41.530603 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-09 05:29:41.530614 | orchestrator | Thursday 09 April 2026 05:29:13 +0000 (0:00:01.228) 0:18:15.415 ******** 2026-04-09 05:29:41.530625 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-09 05:29:41.530636 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.530647 | orchestrator | 2026-04-09 05:29:41.530690 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-09 05:29:41.530702 | orchestrator | Thursday 09 April 2026 05:29:36 +0000 (0:00:22.865) 0:18:38.280 ******** 2026-04-09 05:29:41.530713 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:29:41.530724 | orchestrator | 2026-04-09 05:29:41.530735 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-09 05:29:41.530746 | orchestrator | Thursday 09 April 2026 05:29:38 +0000 (0:00:02.571) 0:18:40.852 ******** 2026-04-09 05:29:41.530756 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:29:41.530767 | orchestrator | 2026-04-09 05:29:41.530778 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-09 05:29:41.530789 | orchestrator | Thursday 09 April 2026 05:29:39 +0000 (0:00:00.803) 0:18:41.655 ******** 2026-04-09 05:29:41.530822 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-09 05:30:24.078856 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-09 05:30:24.078998 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-09 05:30:24.079049 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-09 05:30:24.079068 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-09 05:30:24.079087 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1ffd6f0abde244a54452dfaf8795d9ee7a0f516d'}])  2026-04-09 05:30:24.079105 | orchestrator | 2026-04-09 05:30:24.079122 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-09 05:30:24.079140 | orchestrator | Thursday 09 April 2026 05:29:49 +0000 (0:00:09.306) 0:18:50.962 ******** 2026-04-09 05:30:24.079156 | orchestrator | changed: [testbed-node-2] 2026-04-09 05:30:24.079173 | orchestrator | 
2026-04-09 05:30:24.079188 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 05:30:24.079203 | orchestrator | Thursday 09 April 2026 05:29:51 +0000 (0:00:02.122) 0:18:53.085 ******** 2026-04-09 05:30:24.079216 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:30:24.079230 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-04-09 05:30:24.079243 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-04-09 05:30:24.079257 | orchestrator | 2026-04-09 05:30:24.079271 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 05:30:24.079285 | orchestrator | Thursday 09 April 2026 05:29:53 +0000 (0:00:01.820) 0:18:54.905 ******** 2026-04-09 05:30:24.079297 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-09 05:30:24.079311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-09 05:30:24.079324 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-09 05:30:24.079339 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:30:24.079351 | orchestrator | 2026-04-09 05:30:24.079365 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-04-09 05:30:24.079377 | orchestrator | Thursday 09 April 2026 05:29:54 +0000 (0:00:01.005) 0:18:55.910 ******** 2026-04-09 05:30:24.079391 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:30:24.079404 | orchestrator | 2026-04-09 05:30:24.079417 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-04-09 05:30:24.079430 | orchestrator | Thursday 09 April 2026 05:29:54 +0000 (0:00:00.763) 0:18:56.673 ******** 2026-04-09 05:30:24.079438 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:30:24.079447 | orchestrator | 2026-04-09 05:30:24.079455 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-04-09 05:30:24.079463 | orchestrator | 2026-04-09 05:30:24.079471 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-04-09 05:30:24.079479 | orchestrator | Thursday 09 April 2026 05:29:58 +0000 (0:00:03.342) 0:19:00.016 ******** 2026-04-09 05:30:24.079487 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:30:24.079495 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:30:24.079512 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:30:24.079520 | orchestrator | 2026-04-09 05:30:24.079528 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-09 05:30:24.079536 | orchestrator | 2026-04-09 05:30:24.079544 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-09 05:30:24.079551 | orchestrator | Thursday 09 April 2026 05:29:59 +0000 (0:00:01.600) 0:19:01.617 ******** 2026-04-09 05:30:24.079559 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079567 | orchestrator | 2026-04-09 05:30:24.079580 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 05:30:24.079627 | orchestrator | Thursday 09 April 2026 05:30:00 +0000 (0:00:01.165) 0:19:02.783 ******** 2026-04-09 05:30:24.079670 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079685 | orchestrator | 2026-04-09 05:30:24.079700 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 05:30:24.079709 | orchestrator | Thursday 09 April 2026 05:30:02 +0000 (0:00:01.262) 0:19:04.045 
******** 2026-04-09 05:30:24.079717 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079725 | orchestrator | 2026-04-09 05:30:24.079733 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 05:30:24.079741 | orchestrator | Thursday 09 April 2026 05:30:03 +0000 (0:00:01.147) 0:19:05.193 ******** 2026-04-09 05:30:24.079749 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079757 | orchestrator | 2026-04-09 05:30:24.079765 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 05:30:24.079773 | orchestrator | Thursday 09 April 2026 05:30:04 +0000 (0:00:01.156) 0:19:06.350 ******** 2026-04-09 05:30:24.079781 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079789 | orchestrator | 2026-04-09 05:30:24.079797 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 05:30:24.079805 | orchestrator | Thursday 09 April 2026 05:30:05 +0000 (0:00:01.162) 0:19:07.512 ******** 2026-04-09 05:30:24.079813 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079821 | orchestrator | 2026-04-09 05:30:24.079829 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 05:30:24.079837 | orchestrator | Thursday 09 April 2026 05:30:06 +0000 (0:00:01.109) 0:19:08.622 ******** 2026-04-09 05:30:24.079845 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079853 | orchestrator | 2026-04-09 05:30:24.079861 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 05:30:24.079869 | orchestrator | Thursday 09 April 2026 05:30:07 +0000 (0:00:01.111) 0:19:09.734 ******** 2026-04-09 05:30:24.079877 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079885 | orchestrator | 2026-04-09 05:30:24.079893 | orchestrator | TASK [ceph-handler : Set_fact 
handler_rbd_status] ****************************** 2026-04-09 05:30:24.079902 | orchestrator | Thursday 09 April 2026 05:30:09 +0000 (0:00:01.140) 0:19:10.875 ******** 2026-04-09 05:30:24.079915 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079928 | orchestrator | 2026-04-09 05:30:24.079939 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 05:30:24.079952 | orchestrator | Thursday 09 April 2026 05:30:10 +0000 (0:00:01.138) 0:19:12.013 ******** 2026-04-09 05:30:24.079965 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.079975 | orchestrator | 2026-04-09 05:30:24.079982 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 05:30:24.079991 | orchestrator | Thursday 09 April 2026 05:30:11 +0000 (0:00:01.160) 0:19:13.174 ******** 2026-04-09 05:30:24.079999 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080007 | orchestrator | 2026-04-09 05:30:24.080015 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 05:30:24.080023 | orchestrator | Thursday 09 April 2026 05:30:12 +0000 (0:00:01.240) 0:19:14.415 ******** 2026-04-09 05:30:24.080031 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080039 | orchestrator | 2026-04-09 05:30:24.080047 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 05:30:24.080062 | orchestrator | Thursday 09 April 2026 05:30:13 +0000 (0:00:01.113) 0:19:15.528 ******** 2026-04-09 05:30:24.080070 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080078 | orchestrator | 2026-04-09 05:30:24.080086 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 05:30:24.080094 | orchestrator | Thursday 09 April 2026 05:30:14 +0000 (0:00:01.182) 0:19:16.711 ******** 2026-04-09 05:30:24.080102 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 05:30:24.080110 | orchestrator | 2026-04-09 05:30:24.080118 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 05:30:24.080126 | orchestrator | Thursday 09 April 2026 05:30:15 +0000 (0:00:01.138) 0:19:17.850 ******** 2026-04-09 05:30:24.080134 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080142 | orchestrator | 2026-04-09 05:30:24.080150 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 05:30:24.080158 | orchestrator | Thursday 09 April 2026 05:30:17 +0000 (0:00:01.168) 0:19:19.019 ******** 2026-04-09 05:30:24.080166 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080174 | orchestrator | 2026-04-09 05:30:24.080182 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 05:30:24.080190 | orchestrator | Thursday 09 April 2026 05:30:18 +0000 (0:00:01.194) 0:19:20.213 ******** 2026-04-09 05:30:24.080198 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080206 | orchestrator | 2026-04-09 05:30:24.080214 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 05:30:24.080222 | orchestrator | Thursday 09 April 2026 05:30:19 +0000 (0:00:01.147) 0:19:21.361 ******** 2026-04-09 05:30:24.080230 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080238 | orchestrator | 2026-04-09 05:30:24.080246 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 05:30:24.080254 | orchestrator | Thursday 09 April 2026 05:30:20 +0000 (0:00:01.143) 0:19:22.504 ******** 2026-04-09 05:30:24.080262 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080270 | orchestrator | 2026-04-09 05:30:24.080278 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 
2026-04-09 05:30:24.080287 | orchestrator | Thursday 09 April 2026 05:30:21 +0000 (0:00:01.124) 0:19:23.629 ******** 2026-04-09 05:30:24.080294 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080302 | orchestrator | 2026-04-09 05:30:24.080310 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 05:30:24.080319 | orchestrator | Thursday 09 April 2026 05:30:22 +0000 (0:00:01.139) 0:19:24.768 ******** 2026-04-09 05:30:24.080327 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:30:24.080334 | orchestrator | 2026-04-09 05:30:24.080342 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 05:30:24.080356 | orchestrator | Thursday 09 April 2026 05:30:24 +0000 (0:00:01.115) 0:19:25.883 ******** 2026-04-09 05:30:24.080370 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528053 | orchestrator | 2026-04-09 05:31:08.528174 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 05:31:08.528193 | orchestrator | Thursday 09 April 2026 05:30:25 +0000 (0:00:01.128) 0:19:27.012 ******** 2026-04-09 05:31:08.528206 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528220 | orchestrator | 2026-04-09 05:31:08.528232 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-09 05:31:08.528244 | orchestrator | Thursday 09 April 2026 05:30:26 +0000 (0:00:01.144) 0:19:28.157 ******** 2026-04-09 05:31:08.528255 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528266 | orchestrator | 2026-04-09 05:31:08.528278 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 05:31:08.528289 | orchestrator | Thursday 09 April 2026 05:30:27 +0000 (0:00:01.182) 0:19:29.340 ******** 2026-04-09 05:31:08.528300 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528311 
| orchestrator | 2026-04-09 05:31:08.528346 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 05:31:08.528359 | orchestrator | Thursday 09 April 2026 05:30:28 +0000 (0:00:01.155) 0:19:30.495 ******** 2026-04-09 05:31:08.528370 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528381 | orchestrator | 2026-04-09 05:31:08.528392 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 05:31:08.528403 | orchestrator | Thursday 09 April 2026 05:30:29 +0000 (0:00:01.157) 0:19:31.653 ******** 2026-04-09 05:31:08.528414 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528425 | orchestrator | 2026-04-09 05:31:08.528436 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 05:31:08.528447 | orchestrator | Thursday 09 April 2026 05:30:30 +0000 (0:00:01.160) 0:19:32.814 ******** 2026-04-09 05:31:08.528458 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528468 | orchestrator | 2026-04-09 05:31:08.528480 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 05:31:08.528491 | orchestrator | Thursday 09 April 2026 05:30:32 +0000 (0:00:01.132) 0:19:33.947 ******** 2026-04-09 05:31:08.528502 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528513 | orchestrator | 2026-04-09 05:31:08.528524 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 05:31:08.528535 | orchestrator | Thursday 09 April 2026 05:30:33 +0000 (0:00:01.117) 0:19:35.064 ******** 2026-04-09 05:31:08.528546 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528557 | orchestrator | 2026-04-09 05:31:08.528568 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 05:31:08.528580 | orchestrator | Thursday 09 April 2026 
05:30:34 +0000 (0:00:01.220) 0:19:36.285 ******** 2026-04-09 05:31:08.528591 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528604 | orchestrator | 2026-04-09 05:31:08.528617 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 05:31:08.528656 | orchestrator | Thursday 09 April 2026 05:30:35 +0000 (0:00:01.162) 0:19:37.447 ******** 2026-04-09 05:31:08.528670 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528683 | orchestrator | 2026-04-09 05:31:08.528697 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-09 05:31:08.528710 | orchestrator | Thursday 09 April 2026 05:30:36 +0000 (0:00:01.122) 0:19:38.570 ******** 2026-04-09 05:31:08.528721 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528732 | orchestrator | 2026-04-09 05:31:08.528743 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-09 05:31:08.528754 | orchestrator | Thursday 09 April 2026 05:30:37 +0000 (0:00:01.101) 0:19:39.672 ******** 2026-04-09 05:31:08.528765 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528776 | orchestrator | 2026-04-09 05:31:08.528787 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 05:31:08.528798 | orchestrator | Thursday 09 April 2026 05:30:38 +0000 (0:00:01.141) 0:19:40.813 ******** 2026-04-09 05:31:08.528809 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528820 | orchestrator | 2026-04-09 05:31:08.528831 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 05:31:08.528842 | orchestrator | Thursday 09 April 2026 05:30:40 +0000 (0:00:01.189) 0:19:42.002 ******** 2026-04-09 05:31:08.528853 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528864 | orchestrator | 2026-04-09 05:31:08.528875 | orchestrator | TASK 
[ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 05:31:08.528886 | orchestrator | Thursday 09 April 2026 05:30:41 +0000 (0:00:01.181) 0:19:43.184 ******** 2026-04-09 05:31:08.528897 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528908 | orchestrator | 2026-04-09 05:31:08.528919 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-09 05:31:08.528930 | orchestrator | Thursday 09 April 2026 05:30:42 +0000 (0:00:01.133) 0:19:44.317 ******** 2026-04-09 05:31:08.528941 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.528960 | orchestrator | 2026-04-09 05:31:08.528971 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 05:31:08.528982 | orchestrator | Thursday 09 April 2026 05:30:43 +0000 (0:00:01.203) 0:19:45.520 ******** 2026-04-09 05:31:08.528993 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529004 | orchestrator | 2026-04-09 05:31:08.529015 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-09 05:31:08.529028 | orchestrator | Thursday 09 April 2026 05:30:44 +0000 (0:00:01.137) 0:19:46.658 ******** 2026-04-09 05:31:08.529038 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529049 | orchestrator | 2026-04-09 05:31:08.529060 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-09 05:31:08.529071 | orchestrator | Thursday 09 April 2026 05:30:45 +0000 (0:00:01.156) 0:19:47.815 ******** 2026-04-09 05:31:08.529082 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529093 | orchestrator | 2026-04-09 05:31:08.529103 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-09 05:31:08.529130 | orchestrator | Thursday 09 April 
2026 05:30:47 +0000 (0:00:01.178) 0:19:48.994 ******** 2026-04-09 05:31:08.529160 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529172 | orchestrator | 2026-04-09 05:31:08.529183 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-09 05:31:08.529194 | orchestrator | Thursday 09 April 2026 05:30:48 +0000 (0:00:01.099) 0:19:50.093 ******** 2026-04-09 05:31:08.529205 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529216 | orchestrator | 2026-04-09 05:31:08.529227 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-09 05:31:08.529238 | orchestrator | Thursday 09 April 2026 05:30:49 +0000 (0:00:01.118) 0:19:51.212 ******** 2026-04-09 05:31:08.529249 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529260 | orchestrator | 2026-04-09 05:31:08.529272 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-09 05:31:08.529283 | orchestrator | Thursday 09 April 2026 05:30:50 +0000 (0:00:01.124) 0:19:52.337 ******** 2026-04-09 05:31:08.529294 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529305 | orchestrator | 2026-04-09 05:31:08.529316 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-09 05:31:08.529327 | orchestrator | Thursday 09 April 2026 05:30:51 +0000 (0:00:01.137) 0:19:53.474 ******** 2026-04-09 05:31:08.529338 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529349 | orchestrator | 2026-04-09 05:31:08.529360 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-09 05:31:08.529371 | orchestrator | Thursday 09 April 2026 05:30:52 +0000 (0:00:01.233) 0:19:54.708 ******** 2026-04-09 05:31:08.529382 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529393 | orchestrator | 2026-04-09 05:31:08.529404 
| orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 05:31:08.529416 | orchestrator | Thursday 09 April 2026 05:30:53 +0000 (0:00:01.109) 0:19:55.818 ******** 2026-04-09 05:31:08.529426 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529438 | orchestrator | 2026-04-09 05:31:08.529449 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 05:31:08.529460 | orchestrator | Thursday 09 April 2026 05:30:55 +0000 (0:00:01.247) 0:19:57.065 ******** 2026-04-09 05:31:08.529471 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529482 | orchestrator | 2026-04-09 05:31:08.529493 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 05:31:08.529504 | orchestrator | Thursday 09 April 2026 05:30:56 +0000 (0:00:01.119) 0:19:58.185 ******** 2026-04-09 05:31:08.529515 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529526 | orchestrator | 2026-04-09 05:31:08.529538 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 05:31:08.529557 | orchestrator | Thursday 09 April 2026 05:30:57 +0000 (0:00:01.144) 0:19:59.329 ******** 2026-04-09 05:31:08.529568 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529579 | orchestrator | 2026-04-09 05:31:08.529590 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 05:31:08.529601 | orchestrator | Thursday 09 April 2026 05:30:58 +0000 (0:00:01.137) 0:20:00.467 ******** 2026-04-09 05:31:08.529613 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529645 | orchestrator | 2026-04-09 05:31:08.529658 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 05:31:08.529669 | orchestrator | Thursday 09 
April 2026 05:30:59 +0000 (0:00:01.141) 0:20:01.608 ******** 2026-04-09 05:31:08.529680 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529691 | orchestrator | 2026-04-09 05:31:08.529702 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 05:31:08.529713 | orchestrator | Thursday 09 April 2026 05:31:00 +0000 (0:00:01.209) 0:20:02.818 ******** 2026-04-09 05:31:08.529724 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529735 | orchestrator | 2026-04-09 05:31:08.529745 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 05:31:08.529756 | orchestrator | Thursday 09 April 2026 05:31:02 +0000 (0:00:01.187) 0:20:04.006 ******** 2026-04-09 05:31:08.529767 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-09 05:31:08.529779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-09 05:31:08.529789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-09 05:31:08.529800 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529811 | orchestrator | 2026-04-09 05:31:08.529822 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 05:31:08.529833 | orchestrator | Thursday 09 April 2026 05:31:03 +0000 (0:00:01.723) 0:20:05.729 ******** 2026-04-09 05:31:08.529844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-09 05:31:08.529855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-09 05:31:08.529866 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-09 05:31:08.529877 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529888 | orchestrator | 2026-04-09 05:31:08.529899 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 05:31:08.529910 | orchestrator | 
Thursday 09 April 2026 05:31:05 +0000 (0:00:01.832) 0:20:07.562 ******** 2026-04-09 05:31:08.529921 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-09 05:31:08.529932 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-09 05:31:08.529943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-09 05:31:08.529953 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.529964 | orchestrator | 2026-04-09 05:31:08.529975 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 05:31:08.529986 | orchestrator | Thursday 09 April 2026 05:31:07 +0000 (0:00:01.539) 0:20:09.101 ******** 2026-04-09 05:31:08.529997 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:08.530008 | orchestrator | 2026-04-09 05:31:08.530083 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 05:31:08.530095 | orchestrator | Thursday 09 April 2026 05:31:08 +0000 (0:00:01.125) 0:20:10.227 ******** 2026-04-09 05:31:08.530113 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-09 05:31:08.530133 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:42.649486 | orchestrator | 2026-04-09 05:31:42.649599 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-09 05:31:42.649675 | orchestrator | Thursday 09 April 2026 05:31:09 +0000 (0:00:01.291) 0:20:11.518 ******** 2026-04-09 05:31:42.649690 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:42.649703 | orchestrator | 2026-04-09 05:31:42.649714 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-09 05:31:42.649750 | orchestrator | Thursday 09 April 2026 05:31:10 +0000 (0:00:01.270) 0:20:12.789 ******** 2026-04-09 05:31:42.649762 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 05:31:42.649773 
| orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 05:31:42.649784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 05:31:42.649795 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:42.649806 | orchestrator | 2026-04-09 05:31:42.649818 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-09 05:31:42.649829 | orchestrator | Thursday 09 April 2026 05:31:12 +0000 (0:00:01.442) 0:20:14.232 ******** 2026-04-09 05:31:42.649841 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:42.649852 | orchestrator | 2026-04-09 05:31:42.649863 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-09 05:31:42.649874 | orchestrator | Thursday 09 April 2026 05:31:13 +0000 (0:00:01.135) 0:20:15.367 ******** 2026-04-09 05:31:42.649886 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:42.649896 | orchestrator | 2026-04-09 05:31:42.649908 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-09 05:31:42.649918 | orchestrator | Thursday 09 April 2026 05:31:14 +0000 (0:00:01.126) 0:20:16.494 ******** 2026-04-09 05:31:42.649929 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:42.649940 | orchestrator | 2026-04-09 05:31:42.649951 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-09 05:31:42.649962 | orchestrator | Thursday 09 April 2026 05:31:15 +0000 (0:00:01.148) 0:20:17.643 ******** 2026-04-09 05:31:42.649973 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:31:42.649984 | orchestrator | 2026-04-09 05:31:42.649995 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-09 05:31:42.650005 | orchestrator | 2026-04-09 05:31:42.650078 | orchestrator | TASK [Stop ceph mgr] 
*********************************************************** 2026-04-09 05:31:42.650093 | orchestrator | Thursday 09 April 2026 05:31:16 +0000 (0:00:00.980) 0:20:18.624 ******** 2026-04-09 05:31:42.650106 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650120 | orchestrator | 2026-04-09 05:31:42.650132 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 05:31:42.650145 | orchestrator | Thursday 09 April 2026 05:31:17 +0000 (0:00:00.904) 0:20:19.528 ******** 2026-04-09 05:31:42.650158 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650171 | orchestrator | 2026-04-09 05:31:42.650185 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 05:31:42.650199 | orchestrator | Thursday 09 April 2026 05:31:18 +0000 (0:00:00.785) 0:20:20.314 ******** 2026-04-09 05:31:42.650211 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650224 | orchestrator | 2026-04-09 05:31:42.650237 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 05:31:42.650250 | orchestrator | Thursday 09 April 2026 05:31:19 +0000 (0:00:00.780) 0:20:21.094 ******** 2026-04-09 05:31:42.650263 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650276 | orchestrator | 2026-04-09 05:31:42.650288 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 05:31:42.650301 | orchestrator | Thursday 09 April 2026 05:31:19 +0000 (0:00:00.768) 0:20:21.862 ******** 2026-04-09 05:31:42.650314 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650326 | orchestrator | 2026-04-09 05:31:42.650339 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 05:31:42.650353 | orchestrator | Thursday 09 April 2026 05:31:20 +0000 (0:00:00.802) 0:20:22.664 ******** 2026-04-09 05:31:42.650365 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650378 | orchestrator | 2026-04-09 05:31:42.650389 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 05:31:42.650400 | orchestrator | Thursday 09 April 2026 05:31:21 +0000 (0:00:00.790) 0:20:23.454 ******** 2026-04-09 05:31:42.650410 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650421 | orchestrator | 2026-04-09 05:31:42.650441 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 05:31:42.650452 | orchestrator | Thursday 09 April 2026 05:31:22 +0000 (0:00:00.803) 0:20:24.258 ******** 2026-04-09 05:31:42.650463 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650474 | orchestrator | 2026-04-09 05:31:42.650485 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 05:31:42.650495 | orchestrator | Thursday 09 April 2026 05:31:23 +0000 (0:00:00.771) 0:20:25.029 ******** 2026-04-09 05:31:42.650506 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650517 | orchestrator | 2026-04-09 05:31:42.650528 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 05:31:42.650539 | orchestrator | Thursday 09 April 2026 05:31:23 +0000 (0:00:00.776) 0:20:25.806 ******** 2026-04-09 05:31:42.650550 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650561 | orchestrator | 2026-04-09 05:31:42.650572 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 05:31:42.650583 | orchestrator | Thursday 09 April 2026 05:31:24 +0000 (0:00:00.784) 0:20:26.590 ******** 2026-04-09 05:31:42.650594 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650605 | orchestrator | 2026-04-09 05:31:42.650631 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 
2026-04-09 05:31:42.650643 | orchestrator | Thursday 09 April 2026 05:31:25 +0000 (0:00:00.891) 0:20:27.482 ******** 2026-04-09 05:31:42.650654 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650665 | orchestrator | 2026-04-09 05:31:42.650675 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 05:31:42.650701 | orchestrator | Thursday 09 April 2026 05:31:26 +0000 (0:00:00.927) 0:20:28.409 ******** 2026-04-09 05:31:42.650712 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650724 | orchestrator | 2026-04-09 05:31:42.650752 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 05:31:42.650764 | orchestrator | Thursday 09 April 2026 05:31:27 +0000 (0:00:00.809) 0:20:29.218 ******** 2026-04-09 05:31:42.650775 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650786 | orchestrator | 2026-04-09 05:31:42.650797 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 05:31:42.650808 | orchestrator | Thursday 09 April 2026 05:31:28 +0000 (0:00:00.788) 0:20:30.007 ******** 2026-04-09 05:31:42.650819 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650830 | orchestrator | 2026-04-09 05:31:42.650841 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 05:31:42.650852 | orchestrator | Thursday 09 April 2026 05:31:28 +0000 (0:00:00.795) 0:20:30.803 ******** 2026-04-09 05:31:42.650863 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650874 | orchestrator | 2026-04-09 05:31:42.650884 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 05:31:42.650896 | orchestrator | Thursday 09 April 2026 05:31:29 +0000 (0:00:00.809) 0:20:31.613 ******** 2026-04-09 05:31:42.650906 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650917 
| orchestrator | 2026-04-09 05:31:42.650928 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 05:31:42.650939 | orchestrator | Thursday 09 April 2026 05:31:30 +0000 (0:00:00.802) 0:20:32.415 ******** 2026-04-09 05:31:42.650950 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.650961 | orchestrator | 2026-04-09 05:31:42.650972 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 05:31:42.650983 | orchestrator | Thursday 09 April 2026 05:31:31 +0000 (0:00:00.762) 0:20:33.177 ******** 2026-04-09 05:31:42.650994 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651005 | orchestrator | 2026-04-09 05:31:42.651016 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-09 05:31:42.651028 | orchestrator | Thursday 09 April 2026 05:31:32 +0000 (0:00:00.797) 0:20:33.975 ******** 2026-04-09 05:31:42.651039 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651104 | orchestrator | 2026-04-09 05:31:42.651118 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 05:31:42.651129 | orchestrator | Thursday 09 April 2026 05:31:32 +0000 (0:00:00.831) 0:20:34.806 ******** 2026-04-09 05:31:42.651140 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651150 | orchestrator | 2026-04-09 05:31:42.651161 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 05:31:42.651172 | orchestrator | Thursday 09 April 2026 05:31:33 +0000 (0:00:00.793) 0:20:35.600 ******** 2026-04-09 05:31:42.651183 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651194 | orchestrator | 2026-04-09 05:31:42.651205 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 05:31:42.651216 | orchestrator | Thursday 09 
April 2026 05:31:34 +0000 (0:00:00.785) 0:20:36.385 ******** 2026-04-09 05:31:42.651227 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651238 | orchestrator | 2026-04-09 05:31:42.651249 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-09 05:31:42.651260 | orchestrator | Thursday 09 April 2026 05:31:35 +0000 (0:00:00.773) 0:20:37.159 ******** 2026-04-09 05:31:42.651271 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651281 | orchestrator | 2026-04-09 05:31:42.651292 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 05:31:42.651303 | orchestrator | Thursday 09 April 2026 05:31:36 +0000 (0:00:00.885) 0:20:38.045 ******** 2026-04-09 05:31:42.651314 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651325 | orchestrator | 2026-04-09 05:31:42.651336 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 05:31:42.651347 | orchestrator | Thursday 09 April 2026 05:31:36 +0000 (0:00:00.786) 0:20:38.831 ******** 2026-04-09 05:31:42.651358 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651369 | orchestrator | 2026-04-09 05:31:42.651380 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 05:31:42.651391 | orchestrator | Thursday 09 April 2026 05:31:37 +0000 (0:00:00.773) 0:20:39.605 ******** 2026-04-09 05:31:42.651402 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651413 | orchestrator | 2026-04-09 05:31:42.651424 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 05:31:42.651435 | orchestrator | Thursday 09 April 2026 05:31:38 +0000 (0:00:00.772) 0:20:40.377 ******** 2026-04-09 05:31:42.651446 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651457 | orchestrator | 2026-04-09 05:31:42.651468 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 05:31:42.651479 | orchestrator | Thursday 09 April 2026 05:31:39 +0000 (0:00:00.791) 0:20:41.169 ******** 2026-04-09 05:31:42.651489 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651500 | orchestrator | 2026-04-09 05:31:42.651511 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 05:31:42.651522 | orchestrator | Thursday 09 April 2026 05:31:40 +0000 (0:00:00.778) 0:20:41.948 ******** 2026-04-09 05:31:42.651533 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651544 | orchestrator | 2026-04-09 05:31:42.651555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 05:31:42.651566 | orchestrator | Thursday 09 April 2026 05:31:40 +0000 (0:00:00.809) 0:20:42.757 ******** 2026-04-09 05:31:42.651577 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651588 | orchestrator | 2026-04-09 05:31:42.651599 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 05:31:42.651610 | orchestrator | Thursday 09 April 2026 05:31:41 +0000 (0:00:00.795) 0:20:43.553 ******** 2026-04-09 05:31:42.651637 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651649 | orchestrator | 2026-04-09 05:31:42.651659 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-09 05:31:42.651670 | orchestrator | Thursday 09 April 2026 05:31:42 +0000 (0:00:00.810) 0:20:44.364 ******** 2026-04-09 05:31:42.651688 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:31:42.651707 | orchestrator | 2026-04-09 05:31:42.651726 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-09 05:32:13.242390 | orchestrator | Thursday 09 April 2026 05:31:43 +0000 (0:00:00.801) 0:20:45.166 ******** 
2026-04-09 05:32:13.242501 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242519 | orchestrator | 2026-04-09 05:32:13.242533 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 05:32:13.242545 | orchestrator | Thursday 09 April 2026 05:31:44 +0000 (0:00:00.791) 0:20:45.958 ******** 2026-04-09 05:32:13.242556 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242567 | orchestrator | 2026-04-09 05:32:13.242579 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 05:32:13.242590 | orchestrator | Thursday 09 April 2026 05:31:44 +0000 (0:00:00.810) 0:20:46.768 ******** 2026-04-09 05:32:13.242601 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242656 | orchestrator | 2026-04-09 05:32:13.242669 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 05:32:13.242680 | orchestrator | Thursday 09 April 2026 05:31:45 +0000 (0:00:00.796) 0:20:47.565 ******** 2026-04-09 05:32:13.242691 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242702 | orchestrator | 2026-04-09 05:32:13.242714 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-09 05:32:13.242725 | orchestrator | Thursday 09 April 2026 05:31:46 +0000 (0:00:00.875) 0:20:48.441 ******** 2026-04-09 05:32:13.242736 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242747 | orchestrator | 2026-04-09 05:32:13.242759 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 05:32:13.242771 | orchestrator | Thursday 09 April 2026 05:31:47 +0000 (0:00:00.806) 0:20:49.247 ******** 2026-04-09 05:32:13.242782 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242793 | orchestrator | 2026-04-09 05:32:13.242804 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] *** 2026-04-09 05:32:13.242817 | orchestrator | Thursday 09 April 2026 05:31:48 +0000 (0:00:00.746) 0:20:49.993 ******** 2026-04-09 05:32:13.242828 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242839 | orchestrator | 2026-04-09 05:32:13.242850 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-09 05:32:13.242862 | orchestrator | Thursday 09 April 2026 05:31:48 +0000 (0:00:00.792) 0:20:50.786 ******** 2026-04-09 05:32:13.242873 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242884 | orchestrator | 2026-04-09 05:32:13.242895 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-09 05:32:13.242907 | orchestrator | Thursday 09 April 2026 05:31:49 +0000 (0:00:00.761) 0:20:51.548 ******** 2026-04-09 05:32:13.242918 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242931 | orchestrator | 2026-04-09 05:32:13.242944 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-09 05:32:13.242957 | orchestrator | Thursday 09 April 2026 05:31:50 +0000 (0:00:00.784) 0:20:52.332 ******** 2026-04-09 05:32:13.242970 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.242983 | orchestrator | 2026-04-09 05:32:13.242995 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-09 05:32:13.243008 | orchestrator | Thursday 09 April 2026 05:31:51 +0000 (0:00:00.792) 0:20:53.124 ******** 2026-04-09 05:32:13.243021 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.243035 | orchestrator | 2026-04-09 05:32:13.243048 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-09 05:32:13.243060 | orchestrator | Thursday 09 April 2026 05:31:52 +0000 
(0:00:00.764) 0:20:53.889 ******** 2026-04-09 05:32:13.243073 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.243086 | orchestrator | 2026-04-09 05:32:13.243099 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-09 05:32:13.243139 | orchestrator | Thursday 09 April 2026 05:31:52 +0000 (0:00:00.780) 0:20:54.670 ******** 2026-04-09 05:32:13.243153 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.243165 | orchestrator | 2026-04-09 05:32:13.243178 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-09 05:32:13.243192 | orchestrator | Thursday 09 April 2026 05:31:53 +0000 (0:00:00.850) 0:20:55.520 ******** 2026-04-09 05:32:13.243204 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.243216 | orchestrator | 2026-04-09 05:32:13.243229 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 05:32:13.243242 | orchestrator | Thursday 09 April 2026 05:31:54 +0000 (0:00:00.819) 0:20:56.340 ******** 2026-04-09 05:32:13.243255 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.243267 | orchestrator | 2026-04-09 05:32:13.243281 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 05:32:13.243292 | orchestrator | Thursday 09 April 2026 05:31:55 +0000 (0:00:00.863) 0:20:57.204 ******** 2026-04-09 05:32:13.243303 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.243314 | orchestrator | 2026-04-09 05:32:13.243325 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 05:32:13.243336 | orchestrator | Thursday 09 April 2026 05:31:56 +0000 (0:00:00.848) 0:20:58.053 ******** 2026-04-09 05:32:13.243346 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:32:13.243357 | orchestrator | 2026-04-09 05:32:13.243369 | orchestrator | TASK [ceph-facts : 
Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:32:13.243381 | orchestrator | Thursday 09 April 2026 05:31:56 +0000 (0:00:00.790) 0:20:58.843 ********
2026-04-09 05:32:13.243392 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243403 | orchestrator |
2026-04-09 05:32:13.243414 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:32:13.243425 | orchestrator | Thursday 09 April 2026 05:31:57 +0000 (0:00:00.770) 0:20:59.614 ********
2026-04-09 05:32:13.243436 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243446 | orchestrator |
2026-04-09 05:32:13.243458 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:32:13.243484 | orchestrator | Thursday 09 April 2026 05:31:58 +0000 (0:00:00.774) 0:21:00.388 ********
2026-04-09 05:32:13.243495 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243507 | orchestrator |
2026-04-09 05:32:13.243535 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:32:13.243547 | orchestrator | Thursday 09 April 2026 05:31:59 +0000 (0:00:00.790) 0:21:01.179 ********
2026-04-09 05:32:13.243558 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243569 | orchestrator |
2026-04-09 05:32:13.243580 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:32:13.243591 | orchestrator | Thursday 09 April 2026 05:32:00 +0000 (0:00:00.794) 0:21:01.973 ********
2026-04-09 05:32:13.243602 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 05:32:13.243632 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 05:32:13.243643 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 05:32:13.243654 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243665 | orchestrator |
2026-04-09 05:32:13.243676 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 05:32:13.243687 | orchestrator | Thursday 09 April 2026 05:32:01 +0000 (0:00:01.058) 0:21:03.032 ********
2026-04-09 05:32:13.243698 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 05:32:13.243709 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 05:32:13.243720 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 05:32:13.243731 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243742 | orchestrator |
2026-04-09 05:32:13.243753 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 05:32:13.243771 | orchestrator | Thursday 09 April 2026 05:32:02 +0000 (0:00:01.057) 0:21:04.090 ********
2026-04-09 05:32:13.243782 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 05:32:13.243793 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 05:32:13.243804 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 05:32:13.243814 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243825 | orchestrator |
2026-04-09 05:32:13.243836 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 05:32:13.243847 | orchestrator | Thursday 09 April 2026 05:32:03 +0000 (0:00:01.105) 0:21:05.195 ********
2026-04-09 05:32:13.243858 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243870 | orchestrator |
2026-04-09 05:32:13.243880 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 05:32:13.243891 | orchestrator | Thursday 09 April 2026 05:32:04 +0000 (0:00:00.929) 0:21:05.989 ********
2026-04-09 05:32:13.243903 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-09 05:32:13.243914 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243925 | orchestrator |
2026-04-09 05:32:13.243936 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 05:32:13.243946 | orchestrator | Thursday 09 April 2026 05:32:05 +0000 (0:00:00.929) 0:21:06.919 ********
2026-04-09 05:32:13.243957 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.243968 | orchestrator |
2026-04-09 05:32:13.243979 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-09 05:32:13.243990 | orchestrator | Thursday 09 April 2026 05:32:05 +0000 (0:00:00.805) 0:21:07.724 ********
2026-04-09 05:32:13.244001 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 05:32:13.244012 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 05:32:13.244022 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 05:32:13.244033 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.244044 | orchestrator |
2026-04-09 05:32:13.244055 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-09 05:32:13.244066 | orchestrator | Thursday 09 April 2026 05:32:07 +0000 (0:00:01.459) 0:21:09.184 ********
2026-04-09 05:32:13.244077 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.244088 | orchestrator |
2026-04-09 05:32:13.244099 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-09 05:32:13.244110 | orchestrator | Thursday 09 April 2026 05:32:08 +0000 (0:00:00.780) 0:21:09.964 ********
2026-04-09 05:32:13.244121 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.244132 | orchestrator |
2026-04-09 05:32:13.244143 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-09 05:32:13.244154 | orchestrator | Thursday 09 April 2026 05:32:08 +0000 (0:00:00.819) 0:21:10.784 ********
2026-04-09 05:32:13.244164 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.244175 | orchestrator |
2026-04-09 05:32:13.244186 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-09 05:32:13.244197 | orchestrator | Thursday 09 April 2026 05:32:09 +0000 (0:00:00.788) 0:21:11.572 ********
2026-04-09 05:32:13.244208 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:32:13.244219 | orchestrator |
2026-04-09 05:32:13.244230 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-04-09 05:32:13.244241 | orchestrator |
2026-04-09 05:32:13.244252 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-09 05:32:13.244262 | orchestrator | Thursday 09 April 2026 05:32:10 +0000 (0:00:01.076) 0:21:12.649 ********
2026-04-09 05:32:13.244273 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:13.244284 | orchestrator |
2026-04-09 05:32:13.244295 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 05:32:13.244306 | orchestrator | Thursday 09 April 2026 05:32:11 +0000 (0:00:00.792) 0:21:13.441 ********
2026-04-09 05:32:13.244317 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:13.244338 | orchestrator |
2026-04-09 05:32:13.244349 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 05:32:13.244360 | orchestrator | Thursday 09 April 2026 05:32:12 +0000 (0:00:00.838) 0:21:14.280 ********
2026-04-09 05:32:13.244371 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:13.244382 | orchestrator |
2026-04-09 05:32:13.244393 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 05:32:13.244409 | orchestrator | Thursday 09 April 2026 05:32:13 +0000 (0:00:00.767) 0:21:15.048 ********
2026-04-09 05:32:13.244428 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260509 | orchestrator |
2026-04-09 05:32:45.260656 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 05:32:45.260673 | orchestrator | Thursday 09 April 2026 05:32:13 +0000 (0:00:00.783) 0:21:15.831 ********
2026-04-09 05:32:45.260683 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260693 | orchestrator |
2026-04-09 05:32:45.260701 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 05:32:45.260710 | orchestrator | Thursday 09 April 2026 05:32:14 +0000 (0:00:00.797) 0:21:16.628 ********
2026-04-09 05:32:45.260718 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260727 | orchestrator |
2026-04-09 05:32:45.260735 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 05:32:45.260743 | orchestrator | Thursday 09 April 2026 05:32:15 +0000 (0:00:00.817) 0:21:17.446 ********
2026-04-09 05:32:45.260751 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260760 | orchestrator |
2026-04-09 05:32:45.260768 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 05:32:45.260776 | orchestrator | Thursday 09 April 2026 05:32:16 +0000 (0:00:00.803) 0:21:18.249 ********
2026-04-09 05:32:45.260784 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260792 | orchestrator |
2026-04-09 05:32:45.260800 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 05:32:45.260808 | orchestrator | Thursday 09 April 2026 05:32:17 +0000 (0:00:00.777) 0:21:19.027 ********
2026-04-09 05:32:45.260816 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260825 | orchestrator |
2026-04-09 05:32:45.260833 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 05:32:45.260841 | orchestrator | Thursday 09 April 2026 05:32:17 +0000 (0:00:00.785) 0:21:19.812 ********
2026-04-09 05:32:45.260849 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260857 | orchestrator |
2026-04-09 05:32:45.260865 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 05:32:45.260873 | orchestrator | Thursday 09 April 2026 05:32:18 +0000 (0:00:00.758) 0:21:20.571 ********
2026-04-09 05:32:45.260881 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260889 | orchestrator |
2026-04-09 05:32:45.260897 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 05:32:45.260906 | orchestrator | Thursday 09 April 2026 05:32:19 +0000 (0:00:00.767) 0:21:21.339 ********
2026-04-09 05:32:45.260915 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260923 | orchestrator |
2026-04-09 05:32:45.260931 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-09 05:32:45.260939 | orchestrator | Thursday 09 April 2026 05:32:20 +0000 (0:00:00.779) 0:21:22.119 ********
2026-04-09 05:32:45.260947 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260955 | orchestrator |
2026-04-09 05:32:45.260963 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-09 05:32:45.260971 | orchestrator | Thursday 09 April 2026 05:32:21 +0000 (0:00:00.775) 0:21:22.895 ********
2026-04-09 05:32:45.260979 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.260988 | orchestrator |
2026-04-09 05:32:45.260996 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-09 05:32:45.261004 | orchestrator | Thursday 09 April 2026 05:32:21 +0000 (0:00:00.756) 0:21:23.651 ********
2026-04-09 05:32:45.261034 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261043 | orchestrator |
2026-04-09 05:32:45.261051 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-09 05:32:45.261060 | orchestrator | Thursday 09 April 2026 05:32:22 +0000 (0:00:00.808) 0:21:24.460 ********
2026-04-09 05:32:45.261068 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261076 | orchestrator |
2026-04-09 05:32:45.261084 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-09 05:32:45.261092 | orchestrator | Thursday 09 April 2026 05:32:23 +0000 (0:00:00.780) 0:21:25.240 ********
2026-04-09 05:32:45.261100 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261108 | orchestrator |
2026-04-09 05:32:45.261116 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-09 05:32:45.261124 | orchestrator | Thursday 09 April 2026 05:32:24 +0000 (0:00:00.768) 0:21:26.009 ********
2026-04-09 05:32:45.261132 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261140 | orchestrator |
2026-04-09 05:32:45.261148 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-09 05:32:45.261156 | orchestrator | Thursday 09 April 2026 05:32:24 +0000 (0:00:00.823) 0:21:26.833 ********
2026-04-09 05:32:45.261164 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261172 | orchestrator |
2026-04-09 05:32:45.261180 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-09 05:32:45.261189 | orchestrator | Thursday 09 April 2026 05:32:25 +0000 (0:00:00.796) 0:21:27.630 ********
2026-04-09 05:32:45.261197 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261205 | orchestrator |
2026-04-09 05:32:45.261213 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-09 05:32:45.261221 | orchestrator | Thursday 09 April 2026 05:32:26 +0000 (0:00:00.776) 0:21:28.406 ********
2026-04-09 05:32:45.261229 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261237 | orchestrator |
2026-04-09 05:32:45.261245 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-09 05:32:45.261324 | orchestrator | Thursday 09 April 2026 05:32:27 +0000 (0:00:00.796) 0:21:29.203 ********
2026-04-09 05:32:45.261334 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261341 | orchestrator |
2026-04-09 05:32:45.261349 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-09 05:32:45.261357 | orchestrator | Thursday 09 April 2026 05:32:28 +0000 (0:00:00.798) 0:21:30.001 ********
2026-04-09 05:32:45.261365 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261373 | orchestrator |
2026-04-09 05:32:45.261381 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-09 05:32:45.261403 | orchestrator | Thursday 09 April 2026 05:32:28 +0000 (0:00:00.774) 0:21:30.776 ********
2026-04-09 05:32:45.261411 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261420 | orchestrator |
2026-04-09 05:32:45.261444 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-09 05:32:45.261453 | orchestrator | Thursday 09 April 2026 05:32:29 +0000 (0:00:00.850) 0:21:31.627 ********
2026-04-09 05:32:45.261461 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261469 | orchestrator |
2026-04-09 05:32:45.261477 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-09 05:32:45.261485 | orchestrator | Thursday 09 April 2026 05:32:30 +0000 (0:00:00.772) 0:21:32.399 ********
2026-04-09 05:32:45.261493 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261501 | orchestrator |
2026-04-09 05:32:45.261509 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-09 05:32:45.261517 | orchestrator | Thursday 09 April 2026 05:32:31 +0000 (0:00:00.849) 0:21:33.248 ********
2026-04-09 05:32:45.261524 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261532 | orchestrator |
2026-04-09 05:32:45.261540 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-09 05:32:45.261548 | orchestrator | Thursday 09 April 2026 05:32:32 +0000 (0:00:00.776) 0:21:34.025 ********
2026-04-09 05:32:45.261563 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261571 | orchestrator |
2026-04-09 05:32:45.261579 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-09 05:32:45.261587 | orchestrator | Thursday 09 April 2026 05:32:32 +0000 (0:00:00.779) 0:21:34.805 ********
2026-04-09 05:32:45.261595 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261602 | orchestrator |
2026-04-09 05:32:45.261625 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 05:32:45.261633 | orchestrator | Thursday 09 April 2026 05:32:33 +0000 (0:00:00.799) 0:21:35.605 ********
2026-04-09 05:32:45.261641 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261649 | orchestrator |
2026-04-09 05:32:45.261656 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 05:32:45.261664 | orchestrator | Thursday 09 April 2026 05:32:34 +0000 (0:00:00.827) 0:21:36.432 ********
2026-04-09 05:32:45.261672 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261680 | orchestrator |
2026-04-09 05:32:45.261688 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 05:32:45.261696 | orchestrator | Thursday 09 April 2026 05:32:35 +0000 (0:00:00.831) 0:21:37.264 ********
2026-04-09 05:32:45.261703 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261711 | orchestrator |
2026-04-09 05:32:45.261719 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 05:32:45.261727 | orchestrator | Thursday 09 April 2026 05:32:36 +0000 (0:00:00.760) 0:21:38.024 ********
2026-04-09 05:32:45.261735 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261743 | orchestrator |
2026-04-09 05:32:45.261751 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 05:32:45.261759 | orchestrator | Thursday 09 April 2026 05:32:36 +0000 (0:00:00.778) 0:21:38.803 ********
2026-04-09 05:32:45.261767 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261775 | orchestrator |
2026-04-09 05:32:45.261782 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 05:32:45.261790 | orchestrator | Thursday 09 April 2026 05:32:37 +0000 (0:00:00.802) 0:21:39.605 ********
2026-04-09 05:32:45.261798 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261806 | orchestrator |
2026-04-09 05:32:45.261814 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 05:32:45.261821 | orchestrator | Thursday 09 April 2026 05:32:38 +0000 (0:00:00.788) 0:21:40.393 ********
2026-04-09 05:32:45.261829 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261837 | orchestrator |
2026-04-09 05:32:45.261845 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 05:32:45.261853 | orchestrator | Thursday 09 April 2026 05:32:39 +0000 (0:00:00.787) 0:21:41.181 ********
2026-04-09 05:32:45.261861 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261868 | orchestrator |
2026-04-09 05:32:45.261876 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 05:32:45.261884 | orchestrator | Thursday 09 April 2026 05:32:40 +0000 (0:00:00.805) 0:21:41.987 ********
2026-04-09 05:32:45.261891 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261926 | orchestrator |
2026-04-09 05:32:45.261935 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 05:32:45.261943 | orchestrator | Thursday 09 April 2026 05:32:40 +0000 (0:00:00.780) 0:21:42.768 ********
2026-04-09 05:32:45.261951 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261959 | orchestrator |
2026-04-09 05:32:45.261968 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 05:32:45.261976 | orchestrator | Thursday 09 April 2026 05:32:41 +0000 (0:00:00.800) 0:21:43.568 ********
2026-04-09 05:32:45.261984 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.261992 | orchestrator |
2026-04-09 05:32:45.262000 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 05:32:45.262059 | orchestrator | Thursday 09 April 2026 05:32:42 +0000 (0:00:00.820) 0:21:44.388 ********
2026-04-09 05:32:45.262069 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.262077 | orchestrator |
2026-04-09 05:32:45.262085 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 05:32:45.262094 | orchestrator | Thursday 09 April 2026 05:32:43 +0000 (0:00:00.805) 0:21:45.194 ********
2026-04-09 05:32:45.262101 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.262109 | orchestrator |
2026-04-09 05:32:45.262117 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 05:32:45.262125 | orchestrator | Thursday 09 April 2026 05:32:44 +0000 (0:00:00.764) 0:21:45.958 ********
2026-04-09 05:32:45.262133 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.262141 | orchestrator |
2026-04-09 05:32:45.262149 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 05:32:45.262162 | orchestrator | Thursday 09 April 2026 05:32:45 +0000 (0:00:00.984) 0:21:46.943 ********
2026-04-09 05:32:45.262171 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:32:45.262179 | orchestrator |
2026-04-09 05:32:45.262193 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 05:33:36.660796 | orchestrator | Thursday 09 April 2026 05:32:45 +0000 (0:00:00.775) 0:21:47.718 ********
2026-04-09 05:33:36.660914 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.660932 | orchestrator |
2026-04-09 05:33:36.660946 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 05:33:36.660958 | orchestrator | Thursday 09 April 2026 05:32:46 +0000 (0:00:00.758) 0:21:48.477 ********
2026-04-09 05:33:36.660970 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.660981 | orchestrator |
2026-04-09 05:33:36.660993 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 05:33:36.661004 | orchestrator | Thursday 09 April 2026 05:32:47 +0000 (0:00:00.878) 0:21:49.356 ********
2026-04-09 05:33:36.661015 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661026 | orchestrator |
2026-04-09 05:33:36.661038 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 05:33:36.661049 | orchestrator | Thursday 09 April 2026 05:32:48 +0000 (0:00:00.794) 0:21:50.150 ********
2026-04-09 05:33:36.661060 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661071 | orchestrator |
2026-04-09 05:33:36.661082 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-09 05:33:36.661094 | orchestrator | Thursday 09 April 2026 05:32:49 +0000 (0:00:00.867) 0:21:51.017 ********
2026-04-09 05:33:36.661104 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661116 | orchestrator |
2026-04-09 05:33:36.661127 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-09 05:33:36.661138 | orchestrator | Thursday 09 April 2026 05:32:49 +0000 (0:00:00.813) 0:21:51.831 ********
2026-04-09 05:33:36.661149 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661160 | orchestrator |
2026-04-09 05:33:36.661172 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:33:36.661185 | orchestrator | Thursday 09 April 2026 05:32:50 +0000 (0:00:00.932) 0:21:52.764 ********
2026-04-09 05:33:36.661196 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661207 | orchestrator |
2026-04-09 05:33:36.661218 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:33:36.661229 | orchestrator | Thursday 09 April 2026 05:32:51 +0000 (0:00:00.778) 0:21:53.543 ********
2026-04-09 05:33:36.661240 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661251 | orchestrator |
2026-04-09 05:33:36.661262 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:33:36.661273 | orchestrator | Thursday 09 April 2026 05:32:52 +0000 (0:00:00.780) 0:21:54.324 ********
2026-04-09 05:33:36.661284 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661295 | orchestrator |
2026-04-09 05:33:36.661330 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:33:36.661345 | orchestrator | Thursday 09 April 2026 05:32:53 +0000 (0:00:00.799) 0:21:55.124 ********
2026-04-09 05:33:36.661358 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661371 | orchestrator |
2026-04-09 05:33:36.661384 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:33:36.661398 | orchestrator | Thursday 09 April 2026 05:32:54 +0000 (0:00:00.777) 0:21:55.901 ********
2026-04-09 05:33:36.661410 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-09 05:33:36.661423 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-09 05:33:36.661436 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-09 05:33:36.661450 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661462 | orchestrator |
2026-04-09 05:33:36.661475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 05:33:36.661489 | orchestrator | Thursday 09 April 2026 05:32:55 +0000 (0:00:01.395) 0:21:57.298 ********
2026-04-09 05:33:36.661502 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-09 05:33:36.661514 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-09 05:33:36.661527 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-09 05:33:36.661540 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661552 | orchestrator |
2026-04-09 05:33:36.661566 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 05:33:36.661579 | orchestrator | Thursday 09 April 2026 05:32:56 +0000 (0:00:01.428) 0:21:58.726 ********
2026-04-09 05:33:36.661591 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-09 05:33:36.661632 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-09 05:33:36.661645 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-09 05:33:36.661657 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661670 | orchestrator |
2026-04-09 05:33:36.661682 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 05:33:36.661693 | orchestrator | Thursday 09 April 2026 05:32:57 +0000 (0:00:01.070) 0:21:59.796 ********
2026-04-09 05:33:36.661704 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661715 | orchestrator |
2026-04-09 05:33:36.661726 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 05:33:36.661737 | orchestrator | Thursday 09 April 2026 05:32:58 +0000 (0:00:00.777) 0:22:00.574 ********
2026-04-09 05:33:36.661748 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-09 05:33:36.661759 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661770 | orchestrator |
2026-04-09 05:33:36.661781 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 05:33:36.661792 | orchestrator | Thursday 09 April 2026 05:32:59 +0000 (0:00:00.905) 0:22:01.480 ********
2026-04-09 05:33:36.661802 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661813 | orchestrator |
2026-04-09 05:33:36.661824 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-09 05:33:36.661850 | orchestrator | Thursday 09 April 2026 05:33:00 +0000 (0:00:00.780) 0:22:02.261 ********
2026-04-09 05:33:36.661861 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 05:33:36.661890 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 05:33:36.661902 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:33:36.661913 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661924 | orchestrator |
2026-04-09 05:33:36.661935 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-09 05:33:36.661946 | orchestrator | Thursday 09 April 2026 05:33:01 +0000 (0:00:01.094) 0:22:03.355 ********
2026-04-09 05:33:36.661957 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.661968 | orchestrator |
2026-04-09 05:33:36.661979 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-09 05:33:36.661998 | orchestrator | Thursday 09 April 2026 05:33:02 +0000 (0:00:00.812) 0:22:04.168 ********
2026-04-09 05:33:36.662009 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.662090 | orchestrator |
2026-04-09 05:33:36.662109 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-09 05:33:36.662138 | orchestrator | Thursday 09 April 2026 05:33:03 +0000 (0:00:00.781) 0:22:04.950 ********
2026-04-09 05:33:36.662156 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.662174 | orchestrator |
2026-04-09 05:33:36.662192 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-09 05:33:36.662211 | orchestrator | Thursday 09 April 2026 05:33:03 +0000 (0:00:00.771) 0:22:05.722 ********
2026-04-09 05:33:36.662229 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:33:36.662247 | orchestrator |
2026-04-09 05:33:36.662266 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-09 05:33:36.662285 | orchestrator |
2026-04-09 05:33:36.662304 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-09 05:33:36.662324 | orchestrator | Thursday 09 April 2026 05:33:05 +0000 (0:00:01.388) 0:22:07.111 ********
2026-04-09 05:33:36.662343 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:33:36.662356 | orchestrator |
2026-04-09 05:33:36.662367 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-09 05:33:36.662378 | orchestrator | Thursday 09 April 2026 05:33:18 +0000 (0:00:13.035) 0:22:20.146 ********
2026-04-09 05:33:36.662389 | orchestrator | changed: [testbed-node-0]
2026-04-09 05:33:36.662400 | orchestrator |
2026-04-09 05:33:36.662410 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 05:33:36.662421 | orchestrator | Thursday 09 April 2026 05:33:20 +0000 (0:00:02.462) 0:22:22.609 ********
2026-04-09 05:33:36.662432 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-09 05:33:36.662442 | orchestrator |
2026-04-09 05:33:36.662453 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 05:33:36.662464 | orchestrator | Thursday 09 April 2026 05:33:21 +0000 (0:00:01.145) 0:22:23.754 ********
2026-04-09 05:33:36.662475 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:33:36.662486 | orchestrator |
2026-04-09 05:33:36.662496 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 05:33:36.662507 | orchestrator | Thursday 09 April 2026 05:33:23 +0000 (0:00:01.526) 0:22:25.281 ********
2026-04-09 05:33:36.662518 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:33:36.662528 | orchestrator |
2026-04-09 05:33:36.662539 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 05:33:36.662549 | orchestrator | Thursday 09 April 2026 05:33:24 +0000 (0:00:01.129) 0:22:26.410 ********
2026-04-09 05:33:36.662560 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:33:36.662571 | orchestrator |
2026-04-09 05:33:36.662582 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 05:33:36.662593 | orchestrator | Thursday 09 April 2026 05:33:26 +0000 (0:00:01.560) 0:22:27.971 ********
2026-04-09 05:33:36.662631 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:33:36.662643 | orchestrator |
2026-04-09 05:33:36.662653 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 05:33:36.662664 | orchestrator | Thursday 09 April 2026 05:33:27 +0000 (0:00:01.157) 0:22:29.128 ********
2026-04-09 05:33:36.662675 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:33:36.662686 | orchestrator |
2026-04-09 05:33:36.662697 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 05:33:36.662707 | orchestrator | Thursday 09 April 2026 05:33:28 +0000 (0:00:01.137) 0:22:30.265 ********
2026-04-09 05:33:36.662718 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:33:36.662729 | orchestrator |
2026-04-09 05:33:36.662740 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 05:33:36.662752 | orchestrator | Thursday 09 April 2026 05:33:29 +0000 (0:00:01.157) 0:22:31.422 ********
2026-04-09 05:33:36.662772 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:33:36.662784 | orchestrator |
2026-04-09 05:33:36.662794 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 05:33:36.662805 | orchestrator | Thursday 09 April 2026 05:33:30 +0000 (0:00:01.289) 0:22:32.712 ********
2026-04-09 05:33:36.662816 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:33:36.662827 | orchestrator |
2026-04-09 05:33:36.662838 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 05:33:36.662849 | orchestrator | Thursday 09 April 2026 05:33:31 +0000 (0:00:01.140) 0:22:33.853 ********
2026-04-09 05:33:36.662860 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:33:36.662871 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:33:36.662882 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:33:36.662893 | orchestrator |
2026-04-09 05:33:36.662904 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 05:33:36.662914 | orchestrator | Thursday 09 April 2026 05:33:34 +0000 (0:00:02.131) 0:22:35.984 ********
2026-04-09 05:33:36.662925 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:33:36.662936 | orchestrator |
2026-04-09 05:33:36.662947 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 05:33:36.662966 | orchestrator | Thursday 09 April 2026 05:33:35 +0000 (0:00:01.252) 0:22:37.236 ********
2026-04-09 05:33:36.662977 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:33:36.662999 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:34:00.425214 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:34:00.425341 | orchestrator |
2026-04-09 05:34:00.425359 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 05:34:00.425374 | orchestrator | Thursday 09 April 2026 05:33:38 +0000 (0:00:02.933) 0:22:40.170 ********
2026-04-09 05:34:00.425386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:34:00.425398 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 05:34:00.425410 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 05:34:00.425422 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:00.425433 | orchestrator |
2026-04-09 05:34:00.425445 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 05:34:00.425456 | orchestrator | Thursday 09 April 2026 05:33:39 +0000 (0:00:01.432) 0:22:41.603 ********
2026-04-09 05:34:00.425470 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 05:34:00.425484 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 05:34:00.425496 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 05:34:00.425507 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:00.425518 | orchestrator |
2026-04-09 05:34:00.425530 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 05:34:00.425541 | orchestrator | Thursday 09 April 2026 05:33:41 +0000 (0:00:01.663) 0:22:43.266 ********
2026-04-09 05:34:00.425555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:34:00.425659 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:34:00.425676 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:34:00.425688 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:00.425699 | orchestrator |
2026-04-09 05:34:00.425710 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 05:34:00.425721 | orchestrator | Thursday 09 April 2026 05:33:42 +0000 (0:00:01.207) 0:22:44.474 ********
2026-04-09 05:34:00.425734 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:33:35.922530', 'end': '2026-04-09 05:33:35.979151', 'delta': '0:00:00.056621', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 05:34:00.425783 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter',
'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:33:36.549658', 'end': '2026-04-09 05:33:36.597249', 'delta': '0:00:00.047591', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 05:34:00.425797 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:33:37.104266', 'end': '2026-04-09 05:33:37.149029', 'delta': '0:00:00.044763', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 05:34:00.425809 | orchestrator | 2026-04-09 05:34:00.425820 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 05:34:00.425831 | orchestrator | Thursday 09 April 2026 05:33:43 +0000 (0:00:01.182) 0:22:45.656 ******** 2026-04-09 05:34:00.425842 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:34:00.425863 | orchestrator | 2026-04-09 05:34:00.425874 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 05:34:00.425885 | orchestrator | Thursday 09 April 2026 05:33:45 +0000 
(0:00:01.260) 0:22:46.916 ******** 2026-04-09 05:34:00.425896 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:00.425907 | orchestrator | 2026-04-09 05:34:00.425918 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 05:34:00.425929 | orchestrator | Thursday 09 April 2026 05:33:46 +0000 (0:00:01.232) 0:22:48.149 ******** 2026-04-09 05:34:00.425940 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:34:00.425951 | orchestrator | 2026-04-09 05:34:00.425962 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 05:34:00.425973 | orchestrator | Thursday 09 April 2026 05:33:47 +0000 (0:00:01.117) 0:22:49.266 ******** 2026-04-09 05:34:00.425984 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:34:00.425995 | orchestrator | 2026-04-09 05:34:00.426005 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:34:00.426105 | orchestrator | Thursday 09 April 2026 05:33:49 +0000 (0:00:02.048) 0:22:51.315 ******** 2026-04-09 05:34:00.426129 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:34:00.426142 | orchestrator | 2026-04-09 05:34:00.426153 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 05:34:00.426164 | orchestrator | Thursday 09 April 2026 05:33:50 +0000 (0:00:01.127) 0:22:52.442 ******** 2026-04-09 05:34:00.426175 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:00.426186 | orchestrator | 2026-04-09 05:34:00.426197 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 05:34:00.426208 | orchestrator | Thursday 09 April 2026 05:33:51 +0000 (0:00:01.145) 0:22:53.587 ******** 2026-04-09 05:34:00.426219 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:00.426230 | orchestrator | 2026-04-09 05:34:00.426241 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-04-09 05:34:00.426252 | orchestrator | Thursday 09 April 2026 05:33:53 +0000 (0:00:01.635) 0:22:55.223 ******** 2026-04-09 05:34:00.426262 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:00.426273 | orchestrator | 2026-04-09 05:34:00.426284 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 05:34:00.426295 | orchestrator | Thursday 09 April 2026 05:33:54 +0000 (0:00:01.130) 0:22:56.353 ******** 2026-04-09 05:34:00.426306 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:00.426317 | orchestrator | 2026-04-09 05:34:00.426328 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 05:34:00.426339 | orchestrator | Thursday 09 April 2026 05:33:55 +0000 (0:00:01.147) 0:22:57.501 ******** 2026-04-09 05:34:00.426349 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:00.426360 | orchestrator | 2026-04-09 05:34:00.426371 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 05:34:00.426382 | orchestrator | Thursday 09 April 2026 05:33:56 +0000 (0:00:01.203) 0:22:58.704 ******** 2026-04-09 05:34:00.426393 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:00.426403 | orchestrator | 2026-04-09 05:34:00.426414 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 05:34:00.426425 | orchestrator | Thursday 09 April 2026 05:33:57 +0000 (0:00:01.109) 0:22:59.814 ******** 2026-04-09 05:34:00.426436 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:00.426447 | orchestrator | 2026-04-09 05:34:00.426458 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 05:34:00.426469 | orchestrator | Thursday 09 April 2026 05:33:59 +0000 (0:00:01.145) 0:23:00.960 ******** 2026-04-09 05:34:00.426479 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 05:34:00.426490 | orchestrator | 2026-04-09 05:34:00.426501 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 05:34:00.426519 | orchestrator | Thursday 09 April 2026 05:34:00 +0000 (0:00:01.177) 0:23:02.137 ******** 2026-04-09 05:34:00.426530 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:00.426550 | orchestrator | 2026-04-09 05:34:00.426571 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 05:34:02.836199 | orchestrator | Thursday 09 April 2026 05:34:01 +0000 (0:00:01.160) 0:23:03.298 ******** 2026-04-09 05:34:02.836331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:34:02.836355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:34:02.836369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:34:02.836382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:34:02.836397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:34:02.836409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:34:02.836421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-04-09 05:34:02.836479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78f51fbd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 05:34:02.836517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:34:02.836530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:34:02.836542 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:02.836555 | orchestrator | 2026-04-09 05:34:02.836567 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 05:34:02.836580 | orchestrator | Thursday 09 April 2026 05:34:02 +0000 (0:00:01.309) 0:23:04.608 ******** 2026-04-09 05:34:02.836592 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:02.836640 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:02.836674 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:13.999139 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:13.999260 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:13.999279 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:13.999294 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:13.999348 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78f51fbd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:13.999394 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:13.999409 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:34:13.999425 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
05:34:13.999440 | orchestrator | 2026-04-09 05:34:13.999455 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 05:34:13.999471 | orchestrator | Thursday 09 April 2026 05:34:04 +0000 (0:00:01.274) 0:23:05.882 ******** 2026-04-09 05:34:13.999484 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:34:13.999499 | orchestrator | 2026-04-09 05:34:13.999513 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 05:34:13.999526 | orchestrator | Thursday 09 April 2026 05:34:05 +0000 (0:00:01.539) 0:23:07.421 ******** 2026-04-09 05:34:13.999540 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:34:13.999553 | orchestrator | 2026-04-09 05:34:13.999567 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:34:13.999581 | orchestrator | Thursday 09 April 2026 05:34:06 +0000 (0:00:01.110) 0:23:08.532 ******** 2026-04-09 05:34:13.999649 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:34:13.999665 | orchestrator | 2026-04-09 05:34:13.999681 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:34:13.999733 | orchestrator | Thursday 09 April 2026 05:34:08 +0000 (0:00:01.443) 0:23:09.975 ******** 2026-04-09 05:34:13.999748 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:13.999763 | orchestrator | 2026-04-09 05:34:13.999778 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:34:13.999793 | orchestrator | Thursday 09 April 2026 05:34:09 +0000 (0:00:01.111) 0:23:11.087 ******** 2026-04-09 05:34:13.999808 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:13.999824 | orchestrator | 2026-04-09 05:34:13.999839 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:34:13.999852 | orchestrator | Thursday 09 April 2026 
05:34:10 +0000 (0:00:01.703) 0:23:12.790 ******** 2026-04-09 05:34:13.999866 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:13.999879 | orchestrator | 2026-04-09 05:34:13.999893 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 05:34:13.999908 | orchestrator | Thursday 09 April 2026 05:34:12 +0000 (0:00:01.207) 0:23:13.998 ******** 2026-04-09 05:34:13.999923 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 05:34:13.999938 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-09 05:34:13.999954 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-09 05:34:13.999969 | orchestrator | 2026-04-09 05:34:13.999984 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 05:34:13.999999 | orchestrator | Thursday 09 April 2026 05:34:13 +0000 (0:00:01.686) 0:23:15.684 ******** 2026-04-09 05:34:14.000020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 05:34:14.000036 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 05:34:14.000049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 05:34:14.000064 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:14.000077 | orchestrator | 2026-04-09 05:34:14.000100 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 05:34:57.862942 | orchestrator | Thursday 09 April 2026 05:34:15 +0000 (0:00:01.230) 0:23:16.915 ******** 2026-04-09 05:34:57.863052 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:34:57.863063 | orchestrator | 2026-04-09 05:34:57.863072 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 05:34:57.863079 | orchestrator | Thursday 09 April 2026 05:34:16 +0000 (0:00:01.156) 0:23:18.071 ******** 2026-04-09 05:34:57.863087 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:34:57.863095 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:34:57.863103 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:34:57.863111 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 05:34:57.863118 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:34:57.863125 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:34:57.863132 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:34:57.863139 | orchestrator |
2026-04-09 05:34:57.863146 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 05:34:57.863153 | orchestrator | Thursday 09 April 2026 05:34:18 +0000 (0:00:01.807) 0:23:19.879 ********
2026-04-09 05:34:57.863161 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:34:57.863168 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:34:57.863175 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:34:57.863182 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 05:34:57.863189 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:34:57.863216 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:34:57.863223 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:34:57.863231 | orchestrator |
2026-04-09 05:34:57.863238 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 05:34:57.863245 | orchestrator | Thursday 09 April 2026 05:34:20 +0000 (0:00:02.525) 0:23:22.404 ********
2026-04-09 05:34:57.863252 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-04-09 05:34:57.863259 | orchestrator |
2026-04-09 05:34:57.863266 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 05:34:57.863273 | orchestrator | Thursday 09 April 2026 05:34:21 +0000 (0:00:01.082) 0:23:23.487 ********
2026-04-09 05:34:57.863280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-04-09 05:34:57.863287 | orchestrator |
2026-04-09 05:34:57.863294 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 05:34:57.863301 | orchestrator | Thursday 09 April 2026 05:34:22 +0000 (0:00:01.145) 0:23:24.633 ********
2026-04-09 05:34:57.863309 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:34:57.863316 | orchestrator |
2026-04-09 05:34:57.863323 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 05:34:57.863330 | orchestrator | Thursday 09 April 2026 05:34:24 +0000 (0:00:01.583) 0:23:26.217 ********
2026-04-09 05:34:57.863337 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863344 | orchestrator |
2026-04-09 05:34:57.863351 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 05:34:57.863358 | orchestrator | Thursday 09 April 2026 05:34:25 +0000 (0:00:01.124) 0:23:27.341 ********
2026-04-09 05:34:57.863365 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863372 | orchestrator |
2026-04-09 05:34:57.863379 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 05:34:57.863386 | orchestrator | Thursday 09 April 2026 05:34:26 +0000 (0:00:01.104) 0:23:28.446 ********
2026-04-09 05:34:57.863393 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863400 | orchestrator |
2026-04-09 05:34:57.863407 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 05:34:57.863414 | orchestrator | Thursday 09 April 2026 05:34:27 +0000 (0:00:01.157) 0:23:29.604 ********
2026-04-09 05:34:57.863421 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:34:57.863428 | orchestrator |
2026-04-09 05:34:57.863435 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 05:34:57.863441 | orchestrator | Thursday 09 April 2026 05:34:29 +0000 (0:00:01.555) 0:23:31.159 ********
2026-04-09 05:34:57.863447 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863453 | orchestrator |
2026-04-09 05:34:57.863459 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 05:34:57.863466 | orchestrator | Thursday 09 April 2026 05:34:30 +0000 (0:00:01.143) 0:23:32.302 ********
2026-04-09 05:34:57.863473 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863480 | orchestrator |
2026-04-09 05:34:57.863487 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 05:34:57.863494 | orchestrator | Thursday 09 April 2026 05:34:31 +0000 (0:00:01.125) 0:23:33.428 ********
2026-04-09 05:34:57.863501 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:34:57.863511 | orchestrator |
2026-04-09 05:34:57.863545 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 05:34:57.863555 | orchestrator | Thursday 09 April 2026 05:34:33 +0000 (0:00:01.652) 0:23:35.081 ********
2026-04-09 05:34:57.863566 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:34:57.863576 | orchestrator |
2026-04-09 05:34:57.863586 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 05:34:57.863632 | orchestrator | Thursday 09 April 2026 05:34:34 +0000 (0:00:01.662) 0:23:36.743 ********
2026-04-09 05:34:57.863648 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863657 | orchestrator |
2026-04-09 05:34:57.863667 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 05:34:57.863677 | orchestrator | Thursday 09 April 2026 05:34:35 +0000 (0:00:01.124) 0:23:37.867 ********
2026-04-09 05:34:57.863688 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:34:57.863698 | orchestrator |
2026-04-09 05:34:57.863709 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 05:34:57.863719 | orchestrator | Thursday 09 April 2026 05:34:37 +0000 (0:00:01.124) 0:23:38.991 ********
2026-04-09 05:34:57.863725 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863732 | orchestrator |
2026-04-09 05:34:57.863739 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 05:34:57.863746 | orchestrator | Thursday 09 April 2026 05:34:38 +0000 (0:00:01.128) 0:23:40.120 ********
2026-04-09 05:34:57.863753 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863760 | orchestrator |
2026-04-09 05:34:57.863767 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 05:34:57.863773 | orchestrator | Thursday 09 April 2026 05:34:39 +0000 (0:00:01.129) 0:23:41.250 ********
2026-04-09 05:34:57.863780 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863787 | orchestrator |
2026-04-09 05:34:57.863794 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 05:34:57.863801 | orchestrator | Thursday 09 April 2026 05:34:40 +0000 (0:00:01.124) 0:23:42.375 ********
2026-04-09 05:34:57.863807 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863814 | orchestrator |
2026-04-09 05:34:57.863821 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 05:34:57.863828 | orchestrator | Thursday 09 April 2026 05:34:41 +0000 (0:00:01.140) 0:23:43.516 ********
2026-04-09 05:34:57.863835 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863841 | orchestrator |
2026-04-09 05:34:57.863848 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 05:34:57.863855 | orchestrator | Thursday 09 April 2026 05:34:42 +0000 (0:00:01.192) 0:23:44.708 ********
2026-04-09 05:34:57.863862 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:34:57.863869 | orchestrator |
2026-04-09 05:34:57.863875 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 05:34:57.863882 | orchestrator | Thursday 09 April 2026 05:34:44 +0000 (0:00:01.187) 0:23:45.896 ********
2026-04-09 05:34:57.863889 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:34:57.863896 | orchestrator |
2026-04-09 05:34:57.863903 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 05:34:57.863910 | orchestrator | Thursday 09 April 2026 05:34:45 +0000 (0:00:01.152) 0:23:47.048 ********
2026-04-09 05:34:57.863916 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:34:57.863923 | orchestrator |
2026-04-09 05:34:57.863930 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-09 05:34:57.863937 | orchestrator | Thursday 09 April 2026 05:34:46 +0000 (0:00:01.192) 0:23:48.241 ********
2026-04-09 05:34:57.863944 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863951 | orchestrator |
2026-04-09 05:34:57.863957 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-09 05:34:57.863964 | orchestrator | Thursday 09 April 2026 05:34:47 +0000 (0:00:01.135) 0:23:49.376 ********
2026-04-09 05:34:57.863971 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.863978 | orchestrator |
2026-04-09 05:34:57.863985 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-09 05:34:57.863991 | orchestrator | Thursday 09 April 2026 05:34:48 +0000 (0:00:01.109) 0:23:50.486 ********
2026-04-09 05:34:57.863998 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.864005 | orchestrator |
2026-04-09 05:34:57.864012 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-09 05:34:57.864019 | orchestrator | Thursday 09 April 2026 05:34:49 +0000 (0:00:01.116) 0:23:51.602 ********
2026-04-09 05:34:57.864030 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.864037 | orchestrator |
2026-04-09 05:34:57.864044 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-09 05:34:57.864051 | orchestrator | Thursday 09 April 2026 05:34:50 +0000 (0:00:01.137) 0:23:52.740 ********
2026-04-09 05:34:57.864058 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.864064 | orchestrator |
2026-04-09 05:34:57.864071 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-09 05:34:57.864078 | orchestrator | Thursday 09 April 2026 05:34:52 +0000 (0:00:01.151) 0:23:53.892 ********
2026-04-09 05:34:57.864085 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.864091 | orchestrator |
2026-04-09 05:34:57.864098 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-09 05:34:57.864105 | orchestrator | Thursday 09 April 2026 05:34:53 +0000 (0:00:01.126) 0:23:55.018 ********
2026-04-09 05:34:57.864112 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.864119 | orchestrator |
2026-04-09 05:34:57.864126 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-09 05:34:57.864133 | orchestrator | Thursday 09 April 2026 05:34:54 +0000 (0:00:01.122) 0:23:56.141 ********
2026-04-09 05:34:57.864139 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.864146 | orchestrator |
2026-04-09 05:34:57.864153 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-09 05:34:57.864160 | orchestrator | Thursday 09 April 2026 05:34:55 +0000 (0:00:01.220) 0:23:57.362 ********
2026-04-09 05:34:57.864167 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.864173 | orchestrator |
2026-04-09 05:34:57.864180 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-09 05:34:57.864190 | orchestrator | Thursday 09 April 2026 05:34:56 +0000 (0:00:01.095) 0:23:58.457 ********
2026-04-09 05:34:57.864197 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.864204 | orchestrator |
2026-04-09 05:34:57.864211 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-09 05:34:57.864218 | orchestrator | Thursday 09 April 2026 05:34:57 +0000 (0:00:01.134) 0:23:59.592 ********
2026-04-09 05:34:57.864225 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:34:57.864232 | orchestrator |
2026-04-09 05:34:57.864242 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-09 05:35:47.116069 | orchestrator | Thursday 09 April 2026 05:34:58 +0000 (0:00:01.177) 0:24:00.770 ********
2026-04-09 05:35:47.116192 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.116210 | orchestrator |
2026-04-09 05:35:47.116222 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-09 05:35:47.116234 | orchestrator | Thursday 09 April 2026 05:35:00 +0000 (0:00:01.149) 0:24:01.919 ********
2026-04-09 05:35:47.116246 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:35:47.116258 | orchestrator |
2026-04-09 05:35:47.116269 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-09 05:35:47.116281 | orchestrator | Thursday 09 April 2026 05:35:02 +0000 (0:00:01.984) 0:24:03.903 ********
2026-04-09 05:35:47.116292 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:35:47.116303 | orchestrator |
2026-04-09 05:35:47.116314 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-09 05:35:47.116326 | orchestrator | Thursday 09 April 2026 05:35:04 +0000 (0:00:02.405) 0:24:06.309 ********
2026-04-09 05:35:47.116337 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-04-09 05:35:47.116349 | orchestrator |
2026-04-09 05:35:47.116360 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-09 05:35:47.116372 | orchestrator | Thursday 09 April 2026 05:35:05 +0000 (0:00:01.196) 0:24:07.506 ********
2026-04-09 05:35:47.116383 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.116394 | orchestrator |
2026-04-09 05:35:47.116405 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-09 05:35:47.116441 | orchestrator | Thursday 09 April 2026 05:35:06 +0000 (0:00:01.108) 0:24:08.614 ********
2026-04-09 05:35:47.116453 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.116464 | orchestrator |
2026-04-09 05:35:47.116476 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-09 05:35:47.116487 | orchestrator | Thursday 09 April 2026 05:35:07 +0000 (0:00:01.116) 0:24:09.731 ********
2026-04-09 05:35:47.116498 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 05:35:47.116509 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 05:35:47.116521 | orchestrator |
2026-04-09 05:35:47.116533 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-09 05:35:47.116544 | orchestrator | Thursday 09 April 2026 05:35:09 +0000 (0:00:01.793) 0:24:11.525 ********
2026-04-09 05:35:47.116555 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:35:47.116566 | orchestrator |
2026-04-09 05:35:47.116577 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-09 05:35:47.116623 | orchestrator | Thursday 09 April 2026 05:35:11 +0000 (0:00:01.520) 0:24:13.045 ********
2026-04-09 05:35:47.116646 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.116667 | orchestrator |
2026-04-09 05:35:47.116686 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-09 05:35:47.116704 | orchestrator | Thursday 09 April 2026 05:35:12 +0000 (0:00:01.226) 0:24:14.272 ********
2026-04-09 05:35:47.116717 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.116730 | orchestrator |
2026-04-09 05:35:47.116743 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-09 05:35:47.116756 | orchestrator | Thursday 09 April 2026 05:35:13 +0000 (0:00:01.160) 0:24:15.433 ********
2026-04-09 05:35:47.116770 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.116783 | orchestrator |
2026-04-09 05:35:47.116795 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-09 05:35:47.116808 | orchestrator | Thursday 09 April 2026 05:35:14 +0000 (0:00:01.155) 0:24:16.588 ********
2026-04-09 05:35:47.116821 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-04-09 05:35:47.116833 | orchestrator |
2026-04-09 05:35:47.116845 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-09 05:35:47.116858 | orchestrator | Thursday 09 April 2026 05:35:15 +0000 (0:00:01.150) 0:24:17.738 ********
2026-04-09 05:35:47.116870 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:35:47.116882 | orchestrator |
2026-04-09 05:35:47.116896 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-09 05:35:47.116908 | orchestrator | Thursday 09 April 2026 05:35:17 +0000 (0:00:01.879) 0:24:19.618 ********
2026-04-09 05:35:47.116922 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 05:35:47.116934 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 05:35:47.116947 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 05:35:47.116959 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.116970 | orchestrator |
2026-04-09 05:35:47.116981 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-09 05:35:47.116991 | orchestrator | Thursday 09 April 2026 05:35:18 +0000 (0:00:01.210) 0:24:20.829 ********
2026-04-09 05:35:47.117002 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117013 | orchestrator |
2026-04-09 05:35:47.117023 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-09 05:35:47.117034 | orchestrator | Thursday 09 April 2026 05:35:20 +0000 (0:00:01.131) 0:24:21.960 ********
2026-04-09 05:35:47.117045 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117056 | orchestrator |
2026-04-09 05:35:47.117067 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-09 05:35:47.117101 | orchestrator | Thursday 09 April 2026 05:35:21 +0000 (0:00:01.148) 0:24:23.108 ********
2026-04-09 05:35:47.117113 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117124 | orchestrator |
2026-04-09 05:35:47.117135 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-09 05:35:47.117145 | orchestrator | Thursday 09 April 2026 05:35:22 +0000 (0:00:01.136) 0:24:24.245 ********
2026-04-09 05:35:47.117156 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117167 | orchestrator |
2026-04-09 05:35:47.117196 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-09 05:35:47.117208 | orchestrator | Thursday 09 April 2026 05:35:23 +0000 (0:00:01.127) 0:24:25.373 ********
2026-04-09 05:35:47.117218 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117229 | orchestrator |
2026-04-09 05:35:47.117240 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 05:35:47.117251 | orchestrator | Thursday 09 April 2026 05:35:24 +0000 (0:00:01.134) 0:24:26.507 ********
2026-04-09 05:35:47.117262 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:35:47.117273 | orchestrator |
2026-04-09 05:35:47.117284 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 05:35:47.117295 | orchestrator | Thursday 09 April 2026 05:35:27 +0000 (0:00:02.625) 0:24:29.133 ********
2026-04-09 05:35:47.117306 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:35:47.117316 | orchestrator |
2026-04-09 05:35:47.117327 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 05:35:47.117338 | orchestrator | Thursday 09 April 2026 05:35:28 +0000 (0:00:01.151) 0:24:30.284 ********
2026-04-09 05:35:47.117349 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-04-09 05:35:47.117360 | orchestrator |
2026-04-09 05:35:47.117371 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 05:35:47.117381 | orchestrator | Thursday 09 April 2026 05:35:29 +0000 (0:00:01.118) 0:24:31.403 ********
2026-04-09 05:35:47.117392 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117403 | orchestrator |
2026-04-09 05:35:47.117414 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 05:35:47.117424 | orchestrator | Thursday 09 April 2026 05:35:30 +0000 (0:00:01.142) 0:24:32.546 ********
2026-04-09 05:35:47.117435 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117446 | orchestrator |
2026-04-09 05:35:47.117457 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 05:35:47.117468 | orchestrator | Thursday 09 April 2026 05:35:31 +0000 (0:00:01.162) 0:24:33.708 ********
2026-04-09 05:35:47.117479 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117490 | orchestrator |
2026-04-09 05:35:47.117500 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 05:35:47.117511 | orchestrator | Thursday 09 April 2026 05:35:33 +0000 (0:00:01.171) 0:24:34.880 ********
2026-04-09 05:35:47.117522 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117533 | orchestrator |
2026-04-09 05:35:47.117544 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 05:35:47.117555 | orchestrator | Thursday 09 April 2026 05:35:34 +0000 (0:00:01.153) 0:24:36.034 ********
2026-04-09 05:35:47.117565 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117576 | orchestrator |
2026-04-09 05:35:47.117587 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 05:35:47.117635 | orchestrator | Thursday 09 April 2026 05:35:35 +0000 (0:00:01.243) 0:24:37.277 ********
2026-04-09 05:35:47.117646 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117657 | orchestrator |
2026-04-09 05:35:47.117668 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 05:35:47.117679 | orchestrator | Thursday 09 April 2026 05:35:36 +0000 (0:00:01.173) 0:24:38.451 ********
2026-04-09 05:35:47.117690 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117701 | orchestrator |
2026-04-09 05:35:47.117712 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 05:35:47.117730 | orchestrator | Thursday 09 April 2026 05:35:37 +0000 (0:00:01.129) 0:24:39.581 ********
2026-04-09 05:35:47.117744 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:35:47.117763 | orchestrator |
2026-04-09 05:35:47.117789 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 05:35:47.117810 | orchestrator | Thursday 09 April 2026 05:35:38 +0000 (0:00:01.161) 0:24:40.742 ********
2026-04-09 05:35:47.117828 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:35:47.117846 | orchestrator |
2026-04-09 05:35:47.117865 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 05:35:47.117882 | orchestrator | Thursday 09 April 2026 05:35:40 +0000 (0:00:01.164) 0:24:41.907 ********
2026-04-09 05:35:47.117900 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-09 05:35:47.117919 | orchestrator |
2026-04-09 05:35:47.117937 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 05:35:47.117954 | orchestrator | Thursday 09 April 2026 05:35:41 +0000 (0:00:01.226) 0:24:43.133 ********
2026-04-09 05:35:47.117973 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-09 05:35:47.117992 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-09 05:35:47.118012 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-09 05:35:47.118113 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-09 05:35:47.118132 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-09 05:35:47.118150 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-09 05:35:47.118188 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-09 05:35:47.118209 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-09 05:35:47.118243 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 05:35:47.118264 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 05:35:47.118283 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 05:35:47.118314 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 05:35:47.118333 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 05:35:47.118353 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 05:35:47.118372 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-09 05:35:47.118392 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-09 05:35:47.118411 | orchestrator |
2026-04-09 05:35:47.118443 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 05:36:40.301322 | orchestrator | Thursday 09 April 2026 05:35:48 +0000 (0:00:06.836) 0:24:49.970 ********
2026-04-09 05:36:40.301415 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301426 | orchestrator |
2026-04-09 05:36:40.301434 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 05:36:40.301442 | orchestrator | Thursday 09 April 2026 05:35:49 +0000 (0:00:01.127) 0:24:51.097 ********
2026-04-09 05:36:40.301449 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301456 | orchestrator |
2026-04-09 05:36:40.301463 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 05:36:40.301470 | orchestrator | Thursday 09 April 2026 05:35:50 +0000 (0:00:01.150) 0:24:52.247 ********
2026-04-09 05:36:40.301477 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301484 | orchestrator |
2026-04-09 05:36:40.301491 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 05:36:40.301498 | orchestrator | Thursday 09 April 2026 05:35:51 +0000 (0:00:01.114) 0:24:53.362 ********
2026-04-09 05:36:40.301505 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301512 | orchestrator |
2026-04-09 05:36:40.301519 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 05:36:40.301545 | orchestrator | Thursday 09 April 2026 05:35:52 +0000 (0:00:01.173) 0:24:54.535 ********
2026-04-09 05:36:40.301553 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301559 | orchestrator |
2026-04-09 05:36:40.301566 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 05:36:40.301573 | orchestrator | Thursday 09 April 2026 05:35:53 +0000 (0:00:01.181) 0:24:55.716 ********
2026-04-09 05:36:40.301580 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301587 | orchestrator |
2026-04-09 05:36:40.301644 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 05:36:40.301651 | orchestrator | Thursday 09 April 2026 05:35:54 +0000 (0:00:01.128) 0:24:56.845 ********
2026-04-09 05:36:40.301658 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301665 | orchestrator |
2026-04-09 05:36:40.301672 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 05:36:40.301679 | orchestrator | Thursday 09 April 2026 05:35:56 +0000 (0:00:01.178) 0:24:58.023 ********
2026-04-09 05:36:40.301686 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301693 | orchestrator |
2026-04-09 05:36:40.301700 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 05:36:40.301707 | orchestrator | Thursday 09 April 2026 05:35:57 +0000 (0:00:01.151) 0:24:59.174 ********
2026-04-09 05:36:40.301714 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301721 | orchestrator |
2026-04-09 05:36:40.301728 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 05:36:40.301735 | orchestrator | Thursday 09 April 2026 05:35:58 +0000 (0:00:01.121) 0:25:00.296 ********
2026-04-09 05:36:40.301742 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301749 | orchestrator |
2026-04-09 05:36:40.301756 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 05:36:40.301763 | orchestrator | Thursday 09 April 2026 05:35:59 +0000 (0:00:01.145) 0:25:01.441 ********
2026-04-09 05:36:40.301769 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301776 | orchestrator |
2026-04-09 05:36:40.301783 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 05:36:40.301790 | orchestrator | Thursday 09 April 2026 05:36:00 +0000 (0:00:01.144) 0:25:02.586 ********
2026-04-09 05:36:40.301797 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301804 | orchestrator |
2026-04-09 05:36:40.301811 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 05:36:40.301817 | orchestrator | Thursday 09 April 2026 05:36:01 +0000 (0:00:01.142) 0:25:03.729 ********
2026-04-09 05:36:40.301824 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301831 | orchestrator |
2026-04-09 05:36:40.301838 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 05:36:40.301845 | orchestrator | Thursday 09 April 2026 05:36:03 +0000 (0:00:01.271) 0:25:05.000 ********
2026-04-09 05:36:40.301851 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301858 | orchestrator |
2026-04-09 05:36:40.301865 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 05:36:40.301872 | orchestrator | Thursday 09 April 2026 05:36:04 +0000 (0:00:01.167) 0:25:06.168 ********
2026-04-09 05:36:40.301879 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301887 | orchestrator |
2026-04-09 05:36:40.301894 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-09 05:36:40.301903 | orchestrator | Thursday 09 April 2026 05:36:05 +0000 (0:00:01.321) 0:25:07.490 ********
2026-04-09 05:36:40.301911 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301919 | orchestrator |
2026-04-09 05:36:40.301927 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-09 05:36:40.301935 | orchestrator | Thursday 09 April 2026 05:36:06 +0000 (0:00:01.106) 0:25:08.597 ********
2026-04-09 05:36:40.301943 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301952 | orchestrator |
2026-04-09 05:36:40.301966 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:36:40.301976 | orchestrator | Thursday 09 April 2026 05:36:07 +0000 (0:00:01.100) 0:25:09.697 ********
2026-04-09 05:36:40.301985 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.301993 | orchestrator |
2026-04-09 05:36:40.302054 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:36:40.302063 | orchestrator | Thursday 09 April 2026 05:36:08 +0000 (0:00:01.147) 0:25:10.845 ********
2026-04-09 05:36:40.302070 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.302077 | orchestrator |
2026-04-09 05:36:40.302084 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:36:40.302090 | orchestrator | Thursday 09 April 2026 05:36:10 +0000 (0:00:01.148) 0:25:11.993 ********
2026-04-09 05:36:40.302097 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.302104 | orchestrator |
2026-04-09 05:36:40.302124 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:36:40.302138 | orchestrator | Thursday 09 April 2026 05:36:11 +0000 (0:00:01.168) 0:25:13.162 ********
2026-04-09 05:36:40.302146 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.302153 | orchestrator |
2026-04-09 05:36:40.302160 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:36:40.302166 | orchestrator | Thursday 09 April 2026 05:36:12 +0000 (0:00:01.217) 0:25:14.380 ********
2026-04-09 05:36:40.302173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 05:36:40.302181 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 05:36:40.302187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 05:36:40.302194 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.302201 | orchestrator |
2026-04-09 05:36:40.302208 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 05:36:40.302215 | orchestrator | Thursday 09 April 2026 05:36:13 +0000 (0:00:01.402) 0:25:15.782 ********
2026-04-09 05:36:40.302221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 05:36:40.302228 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 05:36:40.302235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 05:36:40.302242 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.302249 | orchestrator |
2026-04-09 05:36:40.302256 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 05:36:40.302263 | orchestrator | Thursday 09 April 2026 05:36:15 +0000 (0:00:01.847) 0:25:17.629 ********
2026-04-09 05:36:40.302270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 05:36:40.302277 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 05:36:40.302283 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 05:36:40.302290 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.302297 | orchestrator |
2026-04-09 05:36:40.302304 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 05:36:40.302311 | orchestrator | Thursday 09 April 2026 05:36:17 +0000 (0:00:01.834) 0:25:19.463 ********
2026-04-09 05:36:40.302318 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.302325 | orchestrator |
2026-04-09 05:36:40.302331 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 05:36:40.302338 | orchestrator | Thursday 09 April 2026 05:36:18 +0000 (0:00:01.190) 0:25:20.654 ********
2026-04-09 05:36:40.302346 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-09 05:36:40.302353 | orchestrator | skipping: [testbed-node-0]
2026-04-09 05:36:40.302359 | orchestrator |
2026-04-09 05:36:40.302366 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 05:36:40.302373 | orchestrator | Thursday 09 April 2026 05:36:20 +0000 (0:00:01.257) 0:25:21.912 ********
2026-04-09 05:36:40.302380 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:36:40.302394 | orchestrator |
2026-04-09 05:36:40.302401 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-09 05:36:40.302407 | orchestrator | Thursday 09 April 2026 05:36:21 +0000 (0:00:01.775) 0:25:23.687 ********
2026-04-09 05:36:40.302414 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 05:36:40.302421 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:36:40.302429 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:36:40.302436 | orchestrator |
2026-04-09 05:36:40.302443 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-09 05:36:40.302450 | orchestrator | Thursday 09 April 2026 05:36:23 +0000 (0:00:01.692) 0:25:25.380 ********
2026-04-09 05:36:40.302456 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0
2026-04-09 05:36:40.302463 | orchestrator |
2026-04-09 05:36:40.302470 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-09 05:36:40.302477 | orchestrator | Thursday 09 April 2026 05:36:24 +0000 (0:00:01.441) 0:25:26.821 ********
2026-04-09 05:36:40.302484 | orchestrator | ok: [testbed-node-0]
2026-04-09 05:36:40.302491 | orchestrator |
2026-04-09 05:36:40.302497 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-09 05:36:40.302504 | orchestrator | Thursday 09 April 2026 05:36:26 +0000 (0:00:01.528) 0:25:28.350 ********
2026-04-09 05:36:40.302511 | orchestrator |
skipping: [testbed-node-0] 2026-04-09 05:36:40.302518 | orchestrator | 2026-04-09 05:36:40.302525 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-09 05:36:40.302531 | orchestrator | Thursday 09 April 2026 05:36:27 +0000 (0:00:01.175) 0:25:29.526 ******** 2026-04-09 05:36:40.302538 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 05:36:40.302545 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 05:36:40.302552 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 05:36:40.302559 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-09 05:36:40.302566 | orchestrator | 2026-04-09 05:36:40.302573 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-09 05:36:40.302579 | orchestrator | Thursday 09 April 2026 05:36:35 +0000 (0:00:07.757) 0:25:37.284 ******** 2026-04-09 05:36:40.302586 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:36:40.302606 | orchestrator | 2026-04-09 05:36:40.302614 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-09 05:36:40.302624 | orchestrator | Thursday 09 April 2026 05:36:36 +0000 (0:00:01.218) 0:25:38.502 ******** 2026-04-09 05:36:40.302631 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 05:36:40.302638 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 05:36:40.302645 | orchestrator | 2026-04-09 05:36:40.302652 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-09 05:36:40.302659 | orchestrator | Thursday 09 April 2026 05:36:40 +0000 (0:00:03.581) 0:25:42.083 ******** 2026-04-09 05:36:40.302670 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 05:37:27.602871 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 05:37:27.602986 | orchestrator | 2026-04-09 
05:37:27.603003 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-09 05:37:27.603017 | orchestrator | Thursday 09 April 2026 05:36:42 +0000 (0:00:02.002) 0:25:44.086 ******** 2026-04-09 05:37:27.603028 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:37:27.603040 | orchestrator | 2026-04-09 05:37:27.603051 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-09 05:37:27.603063 | orchestrator | Thursday 09 April 2026 05:36:43 +0000 (0:00:01.545) 0:25:45.632 ******** 2026-04-09 05:37:27.603074 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:37:27.603085 | orchestrator | 2026-04-09 05:37:27.603096 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-09 05:37:27.603107 | orchestrator | Thursday 09 April 2026 05:36:44 +0000 (0:00:01.116) 0:25:46.748 ******** 2026-04-09 05:37:27.603142 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:37:27.603154 | orchestrator | 2026-04-09 05:37:27.603165 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-09 05:37:27.603176 | orchestrator | Thursday 09 April 2026 05:36:46 +0000 (0:00:01.151) 0:25:47.899 ******** 2026-04-09 05:37:27.603187 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-04-09 05:37:27.603198 | orchestrator | 2026-04-09 05:37:27.603209 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-09 05:37:27.603220 | orchestrator | Thursday 09 April 2026 05:36:47 +0000 (0:00:01.473) 0:25:49.373 ******** 2026-04-09 05:37:27.603231 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:37:27.603242 | orchestrator | 2026-04-09 05:37:27.603253 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-09 05:37:27.603264 | orchestrator | Thursday 09 
April 2026 05:36:48 +0000 (0:00:01.190) 0:25:50.563 ******** 2026-04-09 05:37:27.603275 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:37:27.603286 | orchestrator | 2026-04-09 05:37:27.603296 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-09 05:37:27.603307 | orchestrator | Thursday 09 April 2026 05:36:49 +0000 (0:00:01.204) 0:25:51.768 ******** 2026-04-09 05:37:27.603319 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-04-09 05:37:27.603330 | orchestrator | 2026-04-09 05:37:27.603341 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-09 05:37:27.603352 | orchestrator | Thursday 09 April 2026 05:36:51 +0000 (0:00:01.488) 0:25:53.257 ******** 2026-04-09 05:37:27.603363 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:37:27.603374 | orchestrator | 2026-04-09 05:37:27.603385 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-09 05:37:27.603398 | orchestrator | Thursday 09 April 2026 05:36:53 +0000 (0:00:02.083) 0:25:55.340 ******** 2026-04-09 05:37:27.603411 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:37:27.603425 | orchestrator | 2026-04-09 05:37:27.603437 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-09 05:37:27.603450 | orchestrator | Thursday 09 April 2026 05:36:55 +0000 (0:00:01.990) 0:25:57.331 ******** 2026-04-09 05:37:27.603462 | orchestrator | ok: [testbed-node-0] 2026-04-09 05:37:27.603475 | orchestrator | 2026-04-09 05:37:27.603487 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-09 05:37:27.603500 | orchestrator | Thursday 09 April 2026 05:36:57 +0000 (0:00:02.423) 0:25:59.754 ******** 2026-04-09 05:37:27.603513 | orchestrator | changed: [testbed-node-0] 2026-04-09 05:37:27.603524 | orchestrator | 2026-04-09 
05:37:27.603535 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-09 05:37:27.603545 | orchestrator | Thursday 09 April 2026 05:37:01 +0000 (0:00:03.920) 0:26:03.675 ******** 2026-04-09 05:37:27.603556 | orchestrator | skipping: [testbed-node-0] 2026-04-09 05:37:27.603567 | orchestrator | 2026-04-09 05:37:27.603578 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-04-09 05:37:27.603589 | orchestrator | 2026-04-09 05:37:27.603644 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-09 05:37:27.603656 | orchestrator | Thursday 09 April 2026 05:37:02 +0000 (0:00:01.086) 0:26:04.762 ******** 2026-04-09 05:37:27.603667 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:37:27.603678 | orchestrator | 2026-04-09 05:37:27.603689 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-04-09 05:37:27.603700 | orchestrator | Thursday 09 April 2026 05:37:05 +0000 (0:00:02.580) 0:26:07.342 ******** 2026-04-09 05:37:27.603711 | orchestrator | changed: [testbed-node-1] 2026-04-09 05:37:27.603721 | orchestrator | 2026-04-09 05:37:27.603732 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 05:37:27.603743 | orchestrator | Thursday 09 April 2026 05:37:07 +0000 (0:00:02.049) 0:26:09.391 ******** 2026-04-09 05:37:27.603763 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-04-09 05:37:27.603774 | orchestrator | 2026-04-09 05:37:27.603785 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-09 05:37:27.603796 | orchestrator | Thursday 09 April 2026 05:37:08 +0000 (0:00:01.145) 0:26:10.537 ******** 2026-04-09 05:37:27.603807 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:27.603818 | orchestrator | 2026-04-09 
05:37:27.603829 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-09 05:37:27.603840 | orchestrator | Thursday 09 April 2026 05:37:10 +0000 (0:00:01.525) 0:26:12.063 ******** 2026-04-09 05:37:27.603851 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:27.603862 | orchestrator | 2026-04-09 05:37:27.603887 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 05:37:27.603899 | orchestrator | Thursday 09 April 2026 05:37:11 +0000 (0:00:01.149) 0:26:13.213 ******** 2026-04-09 05:37:27.603910 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:27.603921 | orchestrator | 2026-04-09 05:37:27.603932 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-09 05:37:27.603943 | orchestrator | Thursday 09 April 2026 05:37:12 +0000 (0:00:01.476) 0:26:14.690 ******** 2026-04-09 05:37:27.603954 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:27.603965 | orchestrator | 2026-04-09 05:37:27.603993 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-09 05:37:27.604005 | orchestrator | Thursday 09 April 2026 05:37:13 +0000 (0:00:01.161) 0:26:15.851 ******** 2026-04-09 05:37:27.604016 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:27.604027 | orchestrator | 2026-04-09 05:37:27.604038 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-09 05:37:27.604049 | orchestrator | Thursday 09 April 2026 05:37:15 +0000 (0:00:01.132) 0:26:16.984 ******** 2026-04-09 05:37:27.604060 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:27.604071 | orchestrator | 2026-04-09 05:37:27.604082 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-09 05:37:27.604093 | orchestrator | Thursday 09 April 2026 05:37:16 +0000 (0:00:01.187) 0:26:18.172 ******** 
2026-04-09 05:37:27.604104 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:27.604115 | orchestrator | 2026-04-09 05:37:27.604126 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-09 05:37:27.604137 | orchestrator | Thursday 09 April 2026 05:37:17 +0000 (0:00:01.151) 0:26:19.324 ******** 2026-04-09 05:37:27.604148 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:27.604159 | orchestrator | 2026-04-09 05:37:27.604170 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-09 05:37:27.604181 | orchestrator | Thursday 09 April 2026 05:37:18 +0000 (0:00:01.123) 0:26:20.447 ******** 2026-04-09 05:37:27.604192 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:37:27.604203 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 05:37:27.604214 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:37:27.604225 | orchestrator | 2026-04-09 05:37:27.604236 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-09 05:37:27.604248 | orchestrator | Thursday 09 April 2026 05:37:20 +0000 (0:00:01.672) 0:26:22.119 ******** 2026-04-09 05:37:27.604267 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:27.604286 | orchestrator | 2026-04-09 05:37:27.604305 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-09 05:37:27.604324 | orchestrator | Thursday 09 April 2026 05:37:21 +0000 (0:00:01.276) 0:26:23.395 ******** 2026-04-09 05:37:27.604341 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:37:27.604353 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 05:37:27.604364 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-09 05:37:27.604375 | orchestrator | 2026-04-09 05:37:27.604395 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 05:37:27.604406 | orchestrator | Thursday 09 April 2026 05:37:24 +0000 (0:00:02.839) 0:26:26.235 ******** 2026-04-09 05:37:27.604417 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-09 05:37:27.604429 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-09 05:37:27.604440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-09 05:37:27.604451 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:27.604461 | orchestrator | 2026-04-09 05:37:27.604473 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 05:37:27.604484 | orchestrator | Thursday 09 April 2026 05:37:25 +0000 (0:00:01.491) 0:26:27.727 ******** 2026-04-09 05:37:27.604497 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 05:37:27.604511 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 05:37:27.604522 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 05:37:27.604533 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:27.604544 | orchestrator | 2026-04-09 05:37:27.604555 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-04-09 05:37:27.604566 | orchestrator | Thursday 09 April 2026 05:37:27 +0000 (0:00:01.652) 0:26:29.380 ******** 2026-04-09 05:37:27.604579 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:27.604620 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:27.604642 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:47.837453 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.837666 | orchestrator | 2026-04-09 05:37:47.837691 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 05:37:47.837705 | orchestrator | Thursday 09 April 2026 05:37:28 +0000 (0:00:01.202) 0:26:30.582 ******** 2026-04-09 05:37:47.837720 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '69d38aa54653', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:37:22.088836', 'end': '2026-04-09 05:37:22.137753', 'delta': '0:00:00.048917', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 05:37:47.837762 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:37:22.639029', 'end': '2026-04-09 05:37:22.689450', 'delta': '0:00:00.050421', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 05:37:47.837775 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:37:23.185810', 'end': '2026-04-09 05:37:23.228881', 'delta': '0:00:00.043071', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 05:37:47.837787 | orchestrator | 2026-04-09 05:37:47.837802 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 05:37:47.837820 | orchestrator | Thursday 09 April 2026 05:37:29 +0000 (0:00:01.193) 0:26:31.776 ******** 2026-04-09 05:37:47.837848 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:47.837868 | orchestrator | 2026-04-09 05:37:47.837886 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 05:37:47.837903 | orchestrator | Thursday 09 April 2026 05:37:31 +0000 (0:00:01.309) 0:26:33.085 ******** 2026-04-09 05:37:47.837922 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.837940 | orchestrator | 2026-04-09 05:37:47.837958 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 05:37:47.837977 | orchestrator | Thursday 09 April 2026 05:37:32 +0000 (0:00:01.321) 0:26:34.408 ******** 2026-04-09 05:37:47.837995 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:47.838012 | orchestrator | 2026-04-09 05:37:47.838110 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 05:37:47.838129 | orchestrator | Thursday 09 April 2026 05:37:33 +0000 (0:00:01.137) 0:26:35.545 ******** 2026-04-09 05:37:47.838148 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:37:47.838168 | orchestrator | 2026-04-09 05:37:47.838204 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:37:47.838223 | orchestrator | Thursday 09 April 2026 05:37:36 +0000 (0:00:02.402) 0:26:37.948 ******** 2026-04-09 
05:37:47.838242 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:37:47.838264 | orchestrator | 2026-04-09 05:37:47.838288 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 05:37:47.838309 | orchestrator | Thursday 09 April 2026 05:37:37 +0000 (0:00:01.143) 0:26:39.091 ******** 2026-04-09 05:37:47.838327 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.838345 | orchestrator | 2026-04-09 05:37:47.838365 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 05:37:47.838384 | orchestrator | Thursday 09 April 2026 05:37:38 +0000 (0:00:01.107) 0:26:40.199 ******** 2026-04-09 05:37:47.838402 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.838434 | orchestrator | 2026-04-09 05:37:47.838453 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:37:47.838472 | orchestrator | Thursday 09 April 2026 05:37:39 +0000 (0:00:01.309) 0:26:41.508 ******** 2026-04-09 05:37:47.838490 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.838510 | orchestrator | 2026-04-09 05:37:47.838557 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 05:37:47.838577 | orchestrator | Thursday 09 April 2026 05:37:40 +0000 (0:00:01.132) 0:26:42.641 ******** 2026-04-09 05:37:47.838626 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.838647 | orchestrator | 2026-04-09 05:37:47.838667 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 05:37:47.838687 | orchestrator | Thursday 09 April 2026 05:37:41 +0000 (0:00:01.221) 0:26:43.863 ******** 2026-04-09 05:37:47.838706 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.838718 | orchestrator | 2026-04-09 05:37:47.838729 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-04-09 05:37:47.838740 | orchestrator | Thursday 09 April 2026 05:37:43 +0000 (0:00:01.147) 0:26:45.010 ******** 2026-04-09 05:37:47.838751 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.838761 | orchestrator | 2026-04-09 05:37:47.838772 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 05:37:47.838783 | orchestrator | Thursday 09 April 2026 05:37:44 +0000 (0:00:01.141) 0:26:46.151 ******** 2026-04-09 05:37:47.838794 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.838804 | orchestrator | 2026-04-09 05:37:47.838815 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 05:37:47.838826 | orchestrator | Thursday 09 April 2026 05:37:45 +0000 (0:00:01.119) 0:26:47.271 ******** 2026-04-09 05:37:47.838837 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.838847 | orchestrator | 2026-04-09 05:37:47.838858 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 05:37:47.838870 | orchestrator | Thursday 09 April 2026 05:37:46 +0000 (0:00:01.138) 0:26:48.409 ******** 2026-04-09 05:37:47.838881 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:47.838892 | orchestrator | 2026-04-09 05:37:47.838902 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 05:37:47.838914 | orchestrator | Thursday 09 April 2026 05:37:47 +0000 (0:00:01.138) 0:26:49.548 ******** 2026-04-09 05:37:47.838926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-04-09 05:37:47.838941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:37:47.838952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:37:47.838965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:37:47.838998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-04-09 05:37:47.839011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:37:47.839033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:37:49.138401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '482e14db', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14', 
'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:37:49.138508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:37:49.138563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:37:49.138578 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:37:49.138591 | orchestrator | 2026-04-09 05:37:49.138649 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 05:37:49.138663 | orchestrator | Thursday 09 April 2026 05:37:48 +0000 (0:00:01.317) 0:26:50.865 ******** 2026-04-09 05:37:49.138676 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:49.138709 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:49.138722 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:49.138735 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:49.138747 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:49.138772 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:49.138784 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:37:49.138808 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '482e14db', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1', 'scsi-SQEMU_QEMU_HARDDISK_482e14db-059a-45b3-acd4-80a1bc5c11af-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:38:24.208302 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:38:24.208446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:38:24.208463 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.208476 | orchestrator | 2026-04-09 05:38:24.208488 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 05:38:24.208499 | 
orchestrator | Thursday 09 April 2026 05:37:50 +0000 (0:00:01.225) 0:26:52.091 ******** 2026-04-09 05:38:24.208509 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:38:24.208519 | orchestrator | 2026-04-09 05:38:24.208530 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 05:38:24.208539 | orchestrator | Thursday 09 April 2026 05:37:51 +0000 (0:00:01.550) 0:26:53.642 ******** 2026-04-09 05:38:24.208549 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:38:24.208559 | orchestrator | 2026-04-09 05:38:24.208568 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:38:24.208578 | orchestrator | Thursday 09 April 2026 05:37:52 +0000 (0:00:01.173) 0:26:54.815 ******** 2026-04-09 05:38:24.208588 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:38:24.208673 | orchestrator | 2026-04-09 05:38:24.208695 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:38:24.208706 | orchestrator | Thursday 09 April 2026 05:37:54 +0000 (0:00:01.546) 0:26:56.361 ******** 2026-04-09 05:38:24.208716 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.208725 | orchestrator | 2026-04-09 05:38:24.208735 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:38:24.208745 | orchestrator | Thursday 09 April 2026 05:37:55 +0000 (0:00:01.147) 0:26:57.509 ******** 2026-04-09 05:38:24.208754 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.208764 | orchestrator | 2026-04-09 05:38:24.208774 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:38:24.208783 | orchestrator | Thursday 09 April 2026 05:37:56 +0000 (0:00:01.223) 0:26:58.733 ******** 2026-04-09 05:38:24.208793 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.208803 | orchestrator | 2026-04-09 05:38:24.208812 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 05:38:24.208822 | orchestrator | Thursday 09 April 2026 05:37:57 +0000 (0:00:01.123) 0:26:59.857 ******** 2026-04-09 05:38:24.208832 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-09 05:38:24.208844 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 05:38:24.208861 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-09 05:38:24.208876 | orchestrator | 2026-04-09 05:38:24.208891 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 05:38:24.208908 | orchestrator | Thursday 09 April 2026 05:37:59 +0000 (0:00:01.703) 0:27:01.560 ******** 2026-04-09 05:38:24.208926 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-09 05:38:24.208944 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-09 05:38:24.208961 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-09 05:38:24.208992 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.209010 | orchestrator | 2026-04-09 05:38:24.209028 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 05:38:24.209045 | orchestrator | Thursday 09 April 2026 05:38:00 +0000 (0:00:01.159) 0:27:02.720 ******** 2026-04-09 05:38:24.209063 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.209081 | orchestrator | 2026-04-09 05:38:24.209099 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 05:38:24.209117 | orchestrator | Thursday 09 April 2026 05:38:01 +0000 (0:00:01.142) 0:27:03.862 ******** 2026-04-09 05:38:24.209135 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:38:24.209153 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 
05:38:24.209171 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:38:24.209188 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 05:38:24.209206 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 05:38:24.209224 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 05:38:24.209262 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:38:24.209281 | orchestrator | 2026-04-09 05:38:24.209299 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 05:38:24.209317 | orchestrator | Thursday 09 April 2026 05:38:04 +0000 (0:00:02.134) 0:27:05.997 ******** 2026-04-09 05:38:24.209334 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:38:24.209352 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 05:38:24.209370 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:38:24.209388 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 05:38:24.209406 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 05:38:24.209424 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 05:38:24.209442 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:38:24.209460 | orchestrator | 2026-04-09 05:38:24.209478 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 05:38:24.209495 | orchestrator | Thursday 09 April 2026 05:38:06 +0000 (0:00:02.319) 0:27:08.316 
******** 2026-04-09 05:38:24.209513 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-04-09 05:38:24.209532 | orchestrator | 2026-04-09 05:38:24.209558 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 05:38:24.209575 | orchestrator | Thursday 09 April 2026 05:38:07 +0000 (0:00:01.276) 0:27:09.593 ******** 2026-04-09 05:38:24.209593 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-04-09 05:38:24.209635 | orchestrator | 2026-04-09 05:38:24.209653 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 05:38:24.209671 | orchestrator | Thursday 09 April 2026 05:38:08 +0000 (0:00:01.154) 0:27:10.747 ******** 2026-04-09 05:38:24.209689 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:38:24.209707 | orchestrator | 2026-04-09 05:38:24.209724 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 05:38:24.209742 | orchestrator | Thursday 09 April 2026 05:38:10 +0000 (0:00:01.533) 0:27:12.281 ******** 2026-04-09 05:38:24.209760 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.209778 | orchestrator | 2026-04-09 05:38:24.209795 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 05:38:24.209822 | orchestrator | Thursday 09 April 2026 05:38:11 +0000 (0:00:01.170) 0:27:13.452 ******** 2026-04-09 05:38:24.209839 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.209856 | orchestrator | 2026-04-09 05:38:24.209873 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 05:38:24.209889 | orchestrator | Thursday 09 April 2026 05:38:12 +0000 (0:00:01.199) 0:27:14.651 ******** 2026-04-09 05:38:24.209901 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
05:38:24.209911 | orchestrator | 2026-04-09 05:38:24.209920 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 05:38:24.209930 | orchestrator | Thursday 09 April 2026 05:38:13 +0000 (0:00:01.154) 0:27:15.806 ******** 2026-04-09 05:38:24.209940 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:38:24.209949 | orchestrator | 2026-04-09 05:38:24.209959 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 05:38:24.209968 | orchestrator | Thursday 09 April 2026 05:38:15 +0000 (0:00:01.595) 0:27:17.401 ******** 2026-04-09 05:38:24.209978 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.209988 | orchestrator | 2026-04-09 05:38:24.209997 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 05:38:24.210007 | orchestrator | Thursday 09 April 2026 05:38:16 +0000 (0:00:01.109) 0:27:18.511 ******** 2026-04-09 05:38:24.210076 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.210087 | orchestrator | 2026-04-09 05:38:24.210097 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 05:38:24.210107 | orchestrator | Thursday 09 April 2026 05:38:17 +0000 (0:00:01.188) 0:27:19.700 ******** 2026-04-09 05:38:24.210117 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:38:24.210127 | orchestrator | 2026-04-09 05:38:24.210136 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 05:38:24.210146 | orchestrator | Thursday 09 April 2026 05:38:19 +0000 (0:00:01.541) 0:27:21.241 ******** 2026-04-09 05:38:24.210156 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:38:24.210166 | orchestrator | 2026-04-09 05:38:24.210175 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 05:38:24.210185 | orchestrator | Thursday 09 April 2026 
05:38:20 +0000 (0:00:01.592) 0:27:22.834 ******** 2026-04-09 05:38:24.210195 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.210205 | orchestrator | 2026-04-09 05:38:24.210215 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 05:38:24.210225 | orchestrator | Thursday 09 April 2026 05:38:21 +0000 (0:00:00.753) 0:27:23.587 ******** 2026-04-09 05:38:24.210235 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:38:24.210244 | orchestrator | 2026-04-09 05:38:24.210254 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 05:38:24.210264 | orchestrator | Thursday 09 April 2026 05:38:22 +0000 (0:00:00.884) 0:27:24.471 ******** 2026-04-09 05:38:24.210274 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.210284 | orchestrator | 2026-04-09 05:38:24.210294 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 05:38:24.210303 | orchestrator | Thursday 09 April 2026 05:38:23 +0000 (0:00:00.753) 0:27:25.225 ******** 2026-04-09 05:38:24.210313 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:38:24.210328 | orchestrator | 2026-04-09 05:38:24.210345 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 05:38:24.210360 | orchestrator | Thursday 09 April 2026 05:38:24 +0000 (0:00:00.791) 0:27:26.016 ******** 2026-04-09 05:38:24.210389 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.867431 | orchestrator | 2026-04-09 05:39:04.867546 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 05:39:04.867563 | orchestrator | Thursday 09 April 2026 05:38:24 +0000 (0:00:00.783) 0:27:26.800 ******** 2026-04-09 05:39:04.867576 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.867588 | orchestrator | 2026-04-09 05:39:04.867599 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 05:39:04.867665 | orchestrator | Thursday 09 April 2026 05:38:25 +0000 (0:00:00.783) 0:27:27.583 ******** 2026-04-09 05:39:04.867678 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.867689 | orchestrator | 2026-04-09 05:39:04.867700 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 05:39:04.867711 | orchestrator | Thursday 09 April 2026 05:38:26 +0000 (0:00:00.768) 0:27:28.352 ******** 2026-04-09 05:39:04.867722 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:39:04.867734 | orchestrator | 2026-04-09 05:39:04.867745 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 05:39:04.867756 | orchestrator | Thursday 09 April 2026 05:38:27 +0000 (0:00:00.847) 0:27:29.199 ******** 2026-04-09 05:39:04.867767 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:39:04.867778 | orchestrator | 2026-04-09 05:39:04.867790 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 05:39:04.867801 | orchestrator | Thursday 09 April 2026 05:38:28 +0000 (0:00:00.793) 0:27:29.992 ******** 2026-04-09 05:39:04.867812 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:39:04.867823 | orchestrator | 2026-04-09 05:39:04.867834 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 05:39:04.867860 | orchestrator | Thursday 09 April 2026 05:38:28 +0000 (0:00:00.774) 0:27:30.766 ******** 2026-04-09 05:39:04.867872 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.867883 | orchestrator | 2026-04-09 05:39:04.867894 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 05:39:04.867904 | orchestrator | Thursday 09 April 2026 05:38:29 +0000 (0:00:00.771) 0:27:31.538 ******** 2026-04-09 05:39:04.867915 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.867926 | orchestrator | 2026-04-09 05:39:04.867937 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 05:39:04.867948 | orchestrator | Thursday 09 April 2026 05:38:30 +0000 (0:00:00.801) 0:27:32.339 ******** 2026-04-09 05:39:04.867962 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.867975 | orchestrator | 2026-04-09 05:39:04.867988 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 05:39:04.868001 | orchestrator | Thursday 09 April 2026 05:38:31 +0000 (0:00:00.787) 0:27:33.127 ******** 2026-04-09 05:39:04.868014 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868027 | orchestrator | 2026-04-09 05:39:04.868040 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 05:39:04.868054 | orchestrator | Thursday 09 April 2026 05:38:32 +0000 (0:00:00.831) 0:27:33.958 ******** 2026-04-09 05:39:04.868067 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868080 | orchestrator | 2026-04-09 05:39:04.868093 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 05:39:04.868105 | orchestrator | Thursday 09 April 2026 05:38:32 +0000 (0:00:00.803) 0:27:34.762 ******** 2026-04-09 05:39:04.868119 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868132 | orchestrator | 2026-04-09 05:39:04.868145 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 05:39:04.868158 | orchestrator | Thursday 09 April 2026 05:38:33 +0000 (0:00:00.767) 0:27:35.530 ******** 2026-04-09 05:39:04.868171 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868183 | orchestrator | 2026-04-09 05:39:04.868197 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with 
ceph_stable_release] *** 2026-04-09 05:39:04.868211 | orchestrator | Thursday 09 April 2026 05:38:34 +0000 (0:00:00.852) 0:27:36.382 ******** 2026-04-09 05:39:04.868224 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868239 | orchestrator | 2026-04-09 05:39:04.868252 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 05:39:04.868266 | orchestrator | Thursday 09 April 2026 05:38:35 +0000 (0:00:00.782) 0:27:37.164 ******** 2026-04-09 05:39:04.868279 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868292 | orchestrator | 2026-04-09 05:39:04.868305 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 05:39:04.868333 | orchestrator | Thursday 09 April 2026 05:38:36 +0000 (0:00:00.795) 0:27:37.960 ******** 2026-04-09 05:39:04.868352 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868372 | orchestrator | 2026-04-09 05:39:04.868392 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 05:39:04.868412 | orchestrator | Thursday 09 April 2026 05:38:36 +0000 (0:00:00.793) 0:27:38.753 ******** 2026-04-09 05:39:04.868430 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868451 | orchestrator | 2026-04-09 05:39:04.868471 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-09 05:39:04.868491 | orchestrator | Thursday 09 April 2026 05:38:37 +0000 (0:00:00.812) 0:27:39.566 ******** 2026-04-09 05:39:04.868510 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868529 | orchestrator | 2026-04-09 05:39:04.868540 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 05:39:04.868551 | orchestrator | Thursday 09 April 2026 05:38:38 +0000 (0:00:00.831) 0:27:40.397 ******** 2026-04-09 05:39:04.868562 | orchestrator | ok: [testbed-node-1] 
2026-04-09 05:39:04.868573 | orchestrator | 2026-04-09 05:39:04.868583 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 05:39:04.868595 | orchestrator | Thursday 09 April 2026 05:38:40 +0000 (0:00:01.611) 0:27:42.009 ******** 2026-04-09 05:39:04.868778 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:39:04.868797 | orchestrator | 2026-04-09 05:39:04.868808 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 05:39:04.868819 | orchestrator | Thursday 09 April 2026 05:38:42 +0000 (0:00:02.022) 0:27:44.032 ******** 2026-04-09 05:39:04.868830 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-04-09 05:39:04.868843 | orchestrator | 2026-04-09 05:39:04.868876 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 05:39:04.868888 | orchestrator | Thursday 09 April 2026 05:38:43 +0000 (0:00:01.133) 0:27:45.166 ******** 2026-04-09 05:39:04.868899 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868910 | orchestrator | 2026-04-09 05:39:04.868921 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 05:39:04.868932 | orchestrator | Thursday 09 April 2026 05:38:44 +0000 (0:00:01.269) 0:27:46.435 ******** 2026-04-09 05:39:04.868943 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.868954 | orchestrator | 2026-04-09 05:39:04.868965 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 05:39:04.868976 | orchestrator | Thursday 09 April 2026 05:38:45 +0000 (0:00:01.137) 0:27:47.572 ******** 2026-04-09 05:39:04.868987 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 05:39:04.868998 | orchestrator | ok: [testbed-node-1] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 05:39:04.869009 | orchestrator | 2026-04-09 05:39:04.869020 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 05:39:04.869031 | orchestrator | Thursday 09 April 2026 05:38:47 +0000 (0:00:01.818) 0:27:49.391 ******** 2026-04-09 05:39:04.869042 | orchestrator | ok: [testbed-node-1] 2026-04-09 05:39:04.869053 | orchestrator | 2026-04-09 05:39:04.869064 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 05:39:04.869075 | orchestrator | Thursday 09 April 2026 05:38:48 +0000 (0:00:01.456) 0:27:50.848 ******** 2026-04-09 05:39:04.869094 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.869106 | orchestrator | 2026-04-09 05:39:04.869117 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 05:39:04.869128 | orchestrator | Thursday 09 April 2026 05:38:50 +0000 (0:00:01.225) 0:27:52.073 ******** 2026-04-09 05:39:04.869139 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.869150 | orchestrator | 2026-04-09 05:39:04.869161 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 05:39:04.869172 | orchestrator | Thursday 09 April 2026 05:38:51 +0000 (0:00:00.853) 0:27:52.927 ******** 2026-04-09 05:39:04.869194 | orchestrator | skipping: [testbed-node-1] 2026-04-09 05:39:04.869205 | orchestrator | 2026-04-09 05:39:04.869220 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 05:39:04.869231 | orchestrator | Thursday 09 April 2026 05:38:51 +0000 (0:00:00.771) 0:27:53.698 ******** 2026-04-09 05:39:04.869242 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-04-09 05:39:04.869253 | orchestrator | 2026-04-09 05:39:04.869264 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ********************
2026-04-09 05:39:04.869275 | orchestrator | Thursday 09 April 2026 05:38:52 +0000 (0:00:01.114) 0:27:54.813 ********
2026-04-09 05:39:04.869286 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:39:04.869297 | orchestrator |
2026-04-09 05:39:04.869308 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-09 05:39:04.869319 | orchestrator | Thursday 09 April 2026 05:38:54 +0000 (0:00:01.899) 0:27:56.713 ********
2026-04-09 05:39:04.869330 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 05:39:04.869341 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 05:39:04.869352 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 05:39:04.869363 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:04.869374 | orchestrator |
2026-04-09 05:39:04.869385 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-09 05:39:04.869395 | orchestrator | Thursday 09 April 2026 05:38:55 +0000 (0:00:01.141) 0:27:57.854 ********
2026-04-09 05:39:04.869406 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:04.869417 | orchestrator |
2026-04-09 05:39:04.869428 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-09 05:39:04.869439 | orchestrator | Thursday 09 April 2026 05:38:57 +0000 (0:00:01.255) 0:27:59.110 ********
2026-04-09 05:39:04.869450 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:04.869461 | orchestrator |
2026-04-09 05:39:04.869471 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-09 05:39:04.869482 | orchestrator | Thursday 09 April 2026 05:38:58 +0000 (0:00:01.174) 0:28:00.285 ********
2026-04-09 05:39:04.869493 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:04.869504 | orchestrator |
2026-04-09 05:39:04.869515 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-09 05:39:04.869526 | orchestrator | Thursday 09 April 2026 05:38:59 +0000 (0:00:01.184) 0:28:01.470 ********
2026-04-09 05:39:04.869537 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:04.869548 | orchestrator |
2026-04-09 05:39:04.869559 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-09 05:39:04.869570 | orchestrator | Thursday 09 April 2026 05:39:00 +0000 (0:00:01.162) 0:28:02.632 ********
2026-04-09 05:39:04.869581 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:04.869592 | orchestrator |
2026-04-09 05:39:04.869636 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 05:39:04.869649 | orchestrator | Thursday 09 April 2026 05:39:01 +0000 (0:00:00.784) 0:28:03.417 ********
2026-04-09 05:39:04.869660 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:39:04.869672 | orchestrator |
2026-04-09 05:39:04.869690 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 05:39:04.869709 | orchestrator | Thursday 09 April 2026 05:39:03 +0000 (0:00:02.252) 0:28:05.669 ********
2026-04-09 05:39:04.869727 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:39:04.869745 | orchestrator |
2026-04-09 05:39:04.869765 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 05:39:04.869783 | orchestrator | Thursday 09 April 2026 05:39:04 +0000 (0:00:01.174) 0:28:06.489 ********
2026-04-09 05:39:04.869803 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-04-09 05:39:04.869821 | orchestrator |
2026-04-09 05:39:04.869850 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 05:39:41.761232 | orchestrator | Thursday 09 April 2026 05:39:05 +0000 (0:00:01.174) 0:28:07.664 ********
2026-04-09 05:39:41.761374 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.761403 | orchestrator |
2026-04-09 05:39:41.761424 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 05:39:41.761443 | orchestrator | Thursday 09 April 2026 05:39:06 +0000 (0:00:01.145) 0:28:08.809 ********
2026-04-09 05:39:41.761459 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.761477 | orchestrator |
2026-04-09 05:39:41.761495 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 05:39:41.761515 | orchestrator | Thursday 09 April 2026 05:39:08 +0000 (0:00:01.188) 0:28:09.998 ********
2026-04-09 05:39:41.761533 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.761551 | orchestrator |
2026-04-09 05:39:41.761569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 05:39:41.761589 | orchestrator | Thursday 09 April 2026 05:39:09 +0000 (0:00:01.135) 0:28:11.133 ********
2026-04-09 05:39:41.761646 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.761667 | orchestrator |
2026-04-09 05:39:41.761684 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 05:39:41.761702 | orchestrator | Thursday 09 April 2026 05:39:10 +0000 (0:00:01.139) 0:28:12.273 ********
2026-04-09 05:39:41.761721 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.761739 | orchestrator |
2026-04-09 05:39:41.761759 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 05:39:41.761799 | orchestrator | Thursday 09 April 2026 05:39:11 +0000 (0:00:01.167) 0:28:13.440 ********
2026-04-09 05:39:41.761819 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.761839 | orchestrator |
2026-04-09 05:39:41.761859 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 05:39:41.761878 | orchestrator | Thursday 09 April 2026 05:39:12 +0000 (0:00:01.201) 0:28:14.642 ********
2026-04-09 05:39:41.761896 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.761916 | orchestrator |
2026-04-09 05:39:41.761937 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 05:39:41.761957 | orchestrator | Thursday 09 April 2026 05:39:13 +0000 (0:00:01.195) 0:28:15.837 ********
2026-04-09 05:39:41.761976 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.761995 | orchestrator |
2026-04-09 05:39:41.762013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 05:39:41.762107 | orchestrator | Thursday 09 April 2026 05:39:15 +0000 (0:00:01.145) 0:28:16.983 ********
2026-04-09 05:39:41.762169 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:39:41.762192 | orchestrator |
2026-04-09 05:39:41.762210 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 05:39:41.762230 | orchestrator | Thursday 09 April 2026 05:39:15 +0000 (0:00:00.809) 0:28:17.792 ********
2026-04-09 05:39:41.762249 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-04-09 05:39:41.762268 | orchestrator |
2026-04-09 05:39:41.762287 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 05:39:41.762305 | orchestrator | Thursday 09 April 2026 05:39:17 +0000 (0:00:01.115) 0:28:18.907 ********
2026-04-09 05:39:41.762325 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-04-09 05:39:41.762344 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-09 05:39:41.762363 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-09 05:39:41.762382 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-09 05:39:41.762400 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-09 05:39:41.762419 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-09 05:39:41.762438 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-09 05:39:41.762457 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-09 05:39:41.762508 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 05:39:41.762529 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 05:39:41.762547 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 05:39:41.762565 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 05:39:41.762583 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 05:39:41.762602 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 05:39:41.762655 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-04-09 05:39:41.762673 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-04-09 05:39:41.762692 | orchestrator |
2026-04-09 05:39:41.762711 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 05:39:41.762729 | orchestrator | Thursday 09 April 2026 05:39:23 +0000 (0:00:06.535) 0:28:25.443 ********
2026-04-09 05:39:41.762747 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.762765 | orchestrator |
2026-04-09 05:39:41.762783 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 05:39:41.762804 | orchestrator | Thursday 09 April 2026 05:39:24 +0000 (0:00:00.768) 0:28:26.211 ********
2026-04-09 05:39:41.762823 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.762841 | orchestrator |
2026-04-09 05:39:41.762860 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 05:39:41.762878 | orchestrator | Thursday 09 April 2026 05:39:25 +0000 (0:00:00.782) 0:28:26.993 ********
2026-04-09 05:39:41.762897 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.762915 | orchestrator |
2026-04-09 05:39:41.762934 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 05:39:41.762952 | orchestrator | Thursday 09 April 2026 05:39:25 +0000 (0:00:00.792) 0:28:27.786 ********
2026-04-09 05:39:41.762971 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.762989 | orchestrator |
2026-04-09 05:39:41.763008 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 05:39:41.763054 | orchestrator | Thursday 09 April 2026 05:39:26 +0000 (0:00:00.793) 0:28:28.579 ********
2026-04-09 05:39:41.763074 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763093 | orchestrator |
2026-04-09 05:39:41.763111 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 05:39:41.763130 | orchestrator | Thursday 09 April 2026 05:39:27 +0000 (0:00:00.783) 0:28:29.363 ********
2026-04-09 05:39:41.763149 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763168 | orchestrator |
2026-04-09 05:39:41.763186 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 05:39:41.763205 | orchestrator | Thursday 09 April 2026 05:39:28 +0000 (0:00:00.803) 0:28:30.167 ********
2026-04-09 05:39:41.763224 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763243 | orchestrator |
2026-04-09 05:39:41.763261 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 05:39:41.763279 | orchestrator | Thursday 09 April 2026 05:39:29 +0000 (0:00:00.774) 0:28:30.941 ********
2026-04-09 05:39:41.763299 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763317 | orchestrator |
2026-04-09 05:39:41.763335 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 05:39:41.763354 | orchestrator | Thursday 09 April 2026 05:39:29 +0000 (0:00:00.821) 0:28:31.763 ********
2026-04-09 05:39:41.763372 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763390 | orchestrator |
2026-04-09 05:39:41.763423 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 05:39:41.763436 | orchestrator | Thursday 09 April 2026 05:39:30 +0000 (0:00:00.792) 0:28:32.555 ********
2026-04-09 05:39:41.763446 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763457 | orchestrator |
2026-04-09 05:39:41.763482 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 05:39:41.763493 | orchestrator | Thursday 09 April 2026 05:39:31 +0000 (0:00:00.783) 0:28:33.339 ********
2026-04-09 05:39:41.763504 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763515 | orchestrator |
2026-04-09 05:39:41.763526 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 05:39:41.763536 | orchestrator | Thursday 09 April 2026 05:39:32 +0000 (0:00:00.812) 0:28:34.152 ********
2026-04-09 05:39:41.763547 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763558 | orchestrator |
2026-04-09 05:39:41.763569 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 05:39:41.763580 | orchestrator | Thursday 09 April 2026 05:39:33 +0000 (0:00:00.776) 0:28:34.929 ********
2026-04-09 05:39:41.763591 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763602 | orchestrator |
2026-04-09 05:39:41.763640 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 05:39:41.763652 | orchestrator | Thursday 09 April 2026 05:39:33 +0000 (0:00:00.872) 0:28:35.801 ********
2026-04-09 05:39:41.763663 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763674 | orchestrator |
2026-04-09 05:39:41.763685 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 05:39:41.763696 | orchestrator | Thursday 09 April 2026 05:39:34 +0000 (0:00:00.836) 0:28:36.637 ********
2026-04-09 05:39:41.763707 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763717 | orchestrator |
2026-04-09 05:39:41.763728 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-09 05:39:41.763739 | orchestrator | Thursday 09 April 2026 05:39:35 +0000 (0:00:00.910) 0:28:37.547 ********
2026-04-09 05:39:41.763750 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763761 | orchestrator |
2026-04-09 05:39:41.763771 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-09 05:39:41.763782 | orchestrator | Thursday 09 April 2026 05:39:36 +0000 (0:00:00.779) 0:28:38.327 ********
2026-04-09 05:39:41.763793 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763804 | orchestrator |
2026-04-09 05:39:41.763815 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:39:41.763827 | orchestrator | Thursday 09 April 2026 05:39:37 +0000 (0:00:00.782) 0:28:39.110 ********
2026-04-09 05:39:41.763838 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763849 | orchestrator |
2026-04-09 05:39:41.763860 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:39:41.763870 | orchestrator | Thursday 09 April 2026 05:39:38 +0000 (0:00:00.775) 0:28:39.886 ********
2026-04-09 05:39:41.763881 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763892 | orchestrator |
2026-04-09 05:39:41.763903 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:39:41.763914 | orchestrator | Thursday 09 April 2026 05:39:38 +0000 (0:00:00.776) 0:28:40.662 ********
2026-04-09 05:39:41.763924 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763935 | orchestrator |
2026-04-09 05:39:41.763946 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:39:41.763957 | orchestrator | Thursday 09 April 2026 05:39:39 +0000 (0:00:00.792) 0:28:41.455 ********
2026-04-09 05:39:41.763967 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.763978 | orchestrator |
2026-04-09 05:39:41.763989 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:39:41.764000 | orchestrator | Thursday 09 April 2026 05:39:40 +0000 (0:00:00.830) 0:28:42.286 ********
2026-04-09 05:39:41.764011 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 05:39:41.764022 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 05:39:41.764033 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 05:39:41.764044 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:39:41.764061 | orchestrator |
2026-04-09 05:39:41.764072 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 05:39:41.764083 | orchestrator | Thursday 09 April 2026 05:39:41 +0000 (0:00:01.091) 0:28:43.377 ********
2026-04-09 05:39:41.764094 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 05:39:41.764116 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 05:40:39.716871 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 05:40:39.716952 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.716958 | orchestrator |
2026-04-09 05:40:39.716964 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 05:40:39.716970 | orchestrator | Thursday 09 April 2026 05:39:42 +0000 (0:00:01.079) 0:28:44.457 ********
2026-04-09 05:40:39.716974 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 05:40:39.716978 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 05:40:39.716982 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 05:40:39.716986 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.716990 | orchestrator |
2026-04-09 05:40:39.716994 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 05:40:39.716999 | orchestrator | Thursday 09 April 2026 05:39:43 +0000 (0:00:01.156) 0:28:45.613 ********
2026-04-09 05:40:39.717003 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.717006 | orchestrator |
2026-04-09 05:40:39.717010 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 05:40:39.717014 | orchestrator | Thursday 09 April 2026 05:39:44 +0000 (0:00:00.807) 0:28:46.420 ********
2026-04-09 05:40:39.717019 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-09 05:40:39.717022 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.717026 | orchestrator |
2026-04-09 05:40:39.717041 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 05:40:39.717045 | orchestrator | Thursday 09 April 2026 05:39:45 +0000 (0:00:00.938) 0:28:47.359 ********
2026-04-09 05:40:39.717049 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:40:39.717053 | orchestrator |
2026-04-09 05:40:39.717057 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-09 05:40:39.717060 | orchestrator | Thursday 09 April 2026 05:39:46 +0000 (0:00:01.394) 0:28:48.753 ********
2026-04-09 05:40:39.717064 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:40:39.717069 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 05:40:39.717073 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:40:39.717077 | orchestrator |
2026-04-09 05:40:39.717080 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-09 05:40:39.717084 | orchestrator | Thursday 09 April 2026 05:39:48 +0000 (0:00:01.673) 0:28:50.427 ********
2026-04-09 05:40:39.717088 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-04-09 05:40:39.717092 | orchestrator |
2026-04-09 05:40:39.717096 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-09 05:40:39.717100 | orchestrator | Thursday 09 April 2026 05:39:49 +0000 (0:00:01.124) 0:28:51.551 ********
2026-04-09 05:40:39.717103 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:40:39.717107 | orchestrator |
2026-04-09 05:40:39.717111 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-09 05:40:39.717115 | orchestrator | Thursday 09 April 2026 05:39:51 +0000 (0:00:01.509) 0:28:53.061 ********
2026-04-09 05:40:39.717119 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.717123 | orchestrator |
2026-04-09 05:40:39.717127 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-09 05:40:39.717131 | orchestrator | Thursday 09 April 2026 05:39:52 +0000 (0:00:01.222) 0:28:54.284 ********
2026-04-09 05:40:39.717135 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 05:40:39.717153 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 05:40:39.717157 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 05:40:39.717161 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-04-09 05:40:39.717165 | orchestrator |
2026-04-09 05:40:39.717168 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-09 05:40:39.717172 | orchestrator | Thursday 09 April 2026 05:39:59 +0000 (0:00:07.537) 0:29:01.821 ********
2026-04-09 05:40:39.717176 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:40:39.717180 | orchestrator |
2026-04-09 05:40:39.717184 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-09 05:40:39.717187 | orchestrator | Thursday 09 April 2026 05:40:01 +0000 (0:00:01.181) 0:29:03.003 ********
2026-04-09 05:40:39.717191 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-09 05:40:39.717195 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 05:40:39.717199 | orchestrator |
2026-04-09 05:40:39.717203 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-09 05:40:39.717206 | orchestrator | Thursday 09 April 2026 05:40:04 +0000 (0:00:03.266) 0:29:06.269 ********
2026-04-09 05:40:39.717210 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-09 05:40:39.717214 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-09 05:40:39.717218 | orchestrator |
2026-04-09 05:40:39.717222 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-09 05:40:39.717226 | orchestrator | Thursday 09 April 2026 05:40:06 +0000 (0:00:01.975) 0:29:08.245 ********
2026-04-09 05:40:39.717229 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:40:39.717233 | orchestrator |
2026-04-09 05:40:39.717237 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-09 05:40:39.717241 | orchestrator | Thursday 09 April 2026 05:40:07 +0000 (0:00:01.484) 0:29:09.730 ********
2026-04-09 05:40:39.717245 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.717248 | orchestrator |
2026-04-09 05:40:39.717252 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-09 05:40:39.717256 | orchestrator | Thursday 09 April 2026 05:40:08 +0000 (0:00:00.784) 0:29:10.515 ********
2026-04-09 05:40:39.717260 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.717263 | orchestrator |
2026-04-09 05:40:39.717267 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-09 05:40:39.717280 | orchestrator | Thursday 09 April 2026 05:40:09 +0000 (0:00:00.732) 0:29:11.248 ********
2026-04-09 05:40:39.717285 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-04-09 05:40:39.717288 | orchestrator |
2026-04-09 05:40:39.717292 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-09 05:40:39.717296 | orchestrator | Thursday 09 April 2026 05:40:10 +0000 (0:00:01.075) 0:29:12.323 ********
2026-04-09 05:40:39.717300 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.717304 | orchestrator |
2026-04-09 05:40:39.717307 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-09 05:40:39.717311 | orchestrator | Thursday 09 April 2026 05:40:11 +0000 (0:00:01.157) 0:29:13.481 ********
2026-04-09 05:40:39.717315 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.717319 | orchestrator |
2026-04-09 05:40:39.717323 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-09 05:40:39.717326 | orchestrator | Thursday 09 April 2026 05:40:12 +0000 (0:00:01.225) 0:29:14.706 ********
2026-04-09 05:40:39.717330 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-04-09 05:40:39.717334 | orchestrator |
2026-04-09 05:40:39.717338 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-09 05:40:39.717342 | orchestrator | Thursday 09 April 2026 05:40:14 +0000 (0:00:01.226) 0:29:15.933 ********
2026-04-09 05:40:39.717352 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:40:39.717356 | orchestrator |
2026-04-09 05:40:39.717360 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-09 05:40:39.717364 | orchestrator | Thursday 09 April 2026 05:40:16 +0000 (0:00:02.064) 0:29:17.997 ********
2026-04-09 05:40:39.717367 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:40:39.717371 | orchestrator |
2026-04-09 05:40:39.717375 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-09 05:40:39.717379 | orchestrator | Thursday 09 April 2026 05:40:18 +0000 (0:00:02.019) 0:29:20.017 ********
2026-04-09 05:40:39.717383 | orchestrator | ok: [testbed-node-1]
2026-04-09 05:40:39.717386 | orchestrator |
2026-04-09 05:40:39.717390 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-09 05:40:39.717394 | orchestrator | Thursday 09 April 2026 05:40:20 +0000 (0:00:02.389) 0:29:22.407 ********
2026-04-09 05:40:39.717398 | orchestrator | changed: [testbed-node-1]
2026-04-09 05:40:39.717402 | orchestrator |
2026-04-09 05:40:39.717405 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-09 05:40:39.717410 | orchestrator | Thursday 09 April 2026 05:40:24 +0000 (0:00:03.518) 0:29:25.925 ********
2026-04-09 05:40:39.717416 | orchestrator | skipping: [testbed-node-1]
2026-04-09 05:40:39.717422 | orchestrator |
2026-04-09 05:40:39.717428 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-09 05:40:39.717434 | orchestrator |
2026-04-09 05:40:39.717440 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-09 05:40:39.717446 | orchestrator | Thursday 09 April 2026 05:40:25 +0000 (0:00:01.005) 0:29:26.930 ********
2026-04-09 05:40:39.717452 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:40:39.717457 | orchestrator |
2026-04-09 05:40:39.717463 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-09 05:40:39.717469 | orchestrator | Thursday 09 April 2026 05:40:27 +0000 (0:00:02.509) 0:29:29.440 ********
2026-04-09 05:40:39.717475 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:40:39.717480 | orchestrator |
2026-04-09 05:40:39.717487 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 05:40:39.717492 | orchestrator | Thursday 09 April 2026 05:40:29 +0000 (0:00:02.036) 0:29:31.477 ********
2026-04-09 05:40:39.717498 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-09 05:40:39.717504 | orchestrator |
2026-04-09 05:40:39.717510 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 05:40:39.717516 | orchestrator | Thursday 09 April 2026 05:40:30 +0000 (0:00:01.141) 0:29:32.619 ********
2026-04-09 05:40:39.717522 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:40:39.717529 | orchestrator |
2026-04-09 05:40:39.717534 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 05:40:39.717538 | orchestrator | Thursday 09 April 2026 05:40:32 +0000 (0:00:01.467) 0:29:34.087 ********
2026-04-09 05:40:39.717542 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:40:39.717546 | orchestrator |
2026-04-09 05:40:39.717550 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 05:40:39.717553 | orchestrator | Thursday 09 April 2026 05:40:33 +0000 (0:00:01.150) 0:29:35.238 ********
2026-04-09 05:40:39.717557 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:40:39.717561 | orchestrator |
2026-04-09 05:40:39.717565 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 05:40:39.717569 | orchestrator | Thursday 09 April 2026 05:40:34 +0000 (0:00:01.519) 0:29:36.758 ********
2026-04-09 05:40:39.717572 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:40:39.717576 | orchestrator |
2026-04-09 05:40:39.717580 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 05:40:39.717584 | orchestrator | Thursday 09 April 2026 05:40:36 +0000 (0:00:01.133) 0:29:37.892 ********
2026-04-09 05:40:39.717587 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:40:39.717591 | orchestrator |
2026-04-09 05:40:39.717595 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 05:40:39.717602 | orchestrator | Thursday 09 April 2026 05:40:37 +0000 (0:00:01.134) 0:29:39.027 ********
2026-04-09 05:40:39.717606 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:40:39.717610 | orchestrator |
2026-04-09 05:40:39.717614 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 05:40:39.717650 | orchestrator | Thursday 09 April 2026 05:40:38 +0000 (0:00:01.146) 0:29:40.173 ********
2026-04-09 05:40:39.717655 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:40:39.717658 | orchestrator |
2026-04-09 05:40:39.717662 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 05:40:39.717666 | orchestrator | Thursday 09 April 2026 05:40:39 +0000 (0:00:01.236) 0:29:41.410 ********
2026-04-09 05:40:39.717670 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:40:39.717674 | orchestrator |
2026-04-09 05:40:39.717682 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 05:41:04.569029 | orchestrator | Thursday 09 April 2026 05:40:40 +0000 (0:00:01.188) 0:29:42.598 ********
2026-04-09 05:41:04.569162 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:41:04.569179 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:41:04.569191 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:41:04.569204 | orchestrator |
2026-04-09 05:41:04.569231 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 05:41:04.570111 | orchestrator | Thursday 09 April 2026 05:40:42 +0000 (0:00:01.765) 0:29:44.364 ********
2026-04-09 05:41:04.570210 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:41:04.570226 | orchestrator |
2026-04-09 05:41:04.570239 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 05:41:04.570250 | orchestrator | Thursday 09 April 2026 05:40:43 +0000 (0:00:01.238) 0:29:45.602 ********
2026-04-09 05:41:04.570262 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:41:04.570274 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:41:04.570285 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:41:04.570297 | orchestrator |
2026-04-09 05:41:04.570308 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 05:41:04.570319 | orchestrator | Thursday 09 April 2026 05:40:46 +0000 (0:00:03.188) 0:29:48.791 ********
2026-04-09 05:41:04.570330 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 05:41:04.570342 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 05:41:04.570353 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:41:04.570364 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:04.570375 | orchestrator |
2026-04-09 05:41:04.570386 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 05:41:04.570397 | orchestrator | Thursday 09 April 2026 05:40:48 +0000 (0:00:01.451) 0:29:50.243 ********
2026-04-09 05:41:04.570410 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 05:41:04.570424 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 05:41:04.570436 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 05:41:04.570447 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:04.570458 | orchestrator |
2026-04-09 05:41:04.570469 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 05:41:04.570510 | orchestrator | Thursday 09 April 2026 05:40:50 +0000 (0:00:01.940) 0:29:52.183 ********
2026-04-09 05:41:04.570524 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:04.570538 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:04.570550 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:04.570561 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:04.570572 | orchestrator |
2026-04-09 05:41:04.570583 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 05:41:04.570594 | orchestrator | Thursday 09 April 2026 05:40:51 +0000 (0:00:01.142) 0:29:53.325 ********
2026-04-09 05:41:04.570661 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:40:44.229499', 'end': '2026-04-09 05:40:44.282295', 'delta': '0:00:00.052796', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 05:41:04.570801 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:40:44.796675', 'end': '2026-04-09 05:40:44.840159', 'delta': '0:00:00.043484', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 05:41:04.570824 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:40:45.714898', 'end': '2026-04-09 05:40:45.757762', 'delta': '0:00:00.042864', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None,
'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 05:41:04.570847 | orchestrator | 2026-04-09 05:41:04.570858 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 05:41:04.570870 | orchestrator | Thursday 09 April 2026 05:40:52 +0000 (0:00:01.247) 0:29:54.572 ******** 2026-04-09 05:41:04.570881 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:41:04.570892 | orchestrator | 2026-04-09 05:41:04.570903 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 05:41:04.570913 | orchestrator | Thursday 09 April 2026 05:40:53 +0000 (0:00:01.246) 0:29:55.819 ******** 2026-04-09 05:41:04.570924 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:04.570935 | orchestrator | 2026-04-09 05:41:04.570946 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 05:41:04.570957 | orchestrator | Thursday 09 April 2026 05:40:55 +0000 (0:00:01.259) 0:29:57.079 ******** 2026-04-09 05:41:04.570968 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:41:04.570979 | orchestrator | 2026-04-09 05:41:04.570990 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 05:41:04.571001 | orchestrator | Thursday 09 April 2026 05:40:56 +0000 (0:00:01.228) 0:29:58.308 ******** 2026-04-09 05:41:04.571012 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:41:04.571023 | orchestrator | 2026-04-09 05:41:04.571034 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:41:04.571045 | orchestrator | Thursday 09 April 2026 05:40:58 +0000 (0:00:02.000) 0:30:00.308 ******** 2026-04-09 05:41:04.571056 | orchestrator | ok: [testbed-node-2] 2026-04-09 
05:41:04.571067 | orchestrator | 2026-04-09 05:41:04.571079 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 05:41:04.571098 | orchestrator | Thursday 09 April 2026 05:40:59 +0000 (0:00:01.197) 0:30:01.506 ******** 2026-04-09 05:41:04.571118 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:04.571136 | orchestrator | 2026-04-09 05:41:04.571155 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 05:41:04.571166 | orchestrator | Thursday 09 April 2026 05:41:00 +0000 (0:00:01.151) 0:30:02.658 ******** 2026-04-09 05:41:04.571177 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:04.571188 | orchestrator | 2026-04-09 05:41:04.571198 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:41:04.571209 | orchestrator | Thursday 09 April 2026 05:41:02 +0000 (0:00:01.227) 0:30:03.886 ******** 2026-04-09 05:41:04.571219 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:04.571230 | orchestrator | 2026-04-09 05:41:04.571242 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 05:41:04.571261 | orchestrator | Thursday 09 April 2026 05:41:03 +0000 (0:00:01.248) 0:30:05.134 ******** 2026-04-09 05:41:04.571280 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:04.571292 | orchestrator | 2026-04-09 05:41:04.571303 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 05:41:04.571313 | orchestrator | Thursday 09 April 2026 05:41:04 +0000 (0:00:01.142) 0:30:06.276 ******** 2026-04-09 05:41:04.571324 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:04.571335 | orchestrator | 2026-04-09 05:41:04.571357 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 05:41:11.587235 | orchestrator | Thursday 09 
April 2026 05:41:05 +0000 (0:00:01.198) 0:30:07.475 ********
2026-04-09 05:41:11.587350 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:11.587377 | orchestrator |
2026-04-09 05:41:11.587398 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-09 05:41:11.587410 | orchestrator | Thursday 09 April 2026 05:41:06 +0000 (0:00:01.144) 0:30:08.619 ********
2026-04-09 05:41:11.587422 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:11.587433 | orchestrator |
2026-04-09 05:41:11.587444 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-09 05:41:11.587479 | orchestrator | Thursday 09 April 2026 05:41:07 +0000 (0:00:01.185) 0:30:09.805 ********
2026-04-09 05:41:11.587491 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:11.587501 | orchestrator |
2026-04-09 05:41:11.587512 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-09 05:41:11.587524 | orchestrator | Thursday 09 April 2026 05:41:09 +0000 (0:00:01.181) 0:30:10.986 ********
2026-04-09 05:41:11.587534 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:11.587545 | orchestrator |
2026-04-09 05:41:11.587556 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-09 05:41:11.587581 | orchestrator | Thursday 09 April 2026 05:41:10 +0000 (0:00:01.105) 0:30:12.092 ********
2026-04-09 05:41:11.587596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 05:41:11.587610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 05:41:11.587622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 05:41:11.587697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-14-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-09 05:41:11.587712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 05:41:11.587725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 05:41:11.587736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 05:41:11.587782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dc1c8a18', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-09 05:41:11.587809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 05:41:11.587824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 05:41:11.587837 | orchestrator |
skipping: [testbed-node-2]
2026-04-09 05:41:11.587850 | orchestrator |
2026-04-09 05:41:11.587863 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-09 05:41:11.587876 | orchestrator | Thursday 09 April 2026 05:41:11 +0000 (0:00:01.289) 0:30:13.381 ********
2026-04-09 05:41:11.587889 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:11.587913 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:20.485336 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:20.485450 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-14-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:20.485472 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:20.485492 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:20.485510 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:20.485552 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dc1c8a18', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc1c8a18-4ba7-4c32-b16d-97b935c649ca-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:20.485588 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:20.485599 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:41:20.485610 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:20.485622 | orchestrator |
2026-04-09 05:41:20.485699 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-09 05:41:20.485711 | orchestrator | Thursday 09 April 2026 05:41:12 +0000 (0:00:01.246) 0:30:14.628 ********
2026-04-09 05:41:20.485721 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:41:20.485732 | orchestrator |
2026-04-09 05:41:20.485742 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 05:41:20.485752 | orchestrator
| Thursday 09 April 2026 05:41:14 +0000 (0:00:01.561) 0:30:16.190 ********
2026-04-09 05:41:20.485768 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:41:20.485778 | orchestrator |
2026-04-09 05:41:20.485788 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 05:41:20.485798 | orchestrator | Thursday 09 April 2026 05:41:15 +0000 (0:00:01.152) 0:30:17.343 ********
2026-04-09 05:41:20.485808 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:41:20.485818 | orchestrator |
2026-04-09 05:41:20.485828 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 05:41:20.485838 | orchestrator | Thursday 09 April 2026 05:41:16 +0000 (0:00:01.481) 0:30:18.824 ********
2026-04-09 05:41:20.485848 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:20.485858 | orchestrator |
2026-04-09 05:41:20.485868 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 05:41:20.485878 | orchestrator | Thursday 09 April 2026 05:41:18 +0000 (0:00:01.221) 0:30:19.988 ********
2026-04-09 05:41:20.485887 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:20.485897 | orchestrator |
2026-04-09 05:41:20.485907 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 05:41:20.485917 | orchestrator | Thursday 09 April 2026 05:41:19 +0000 (0:00:01.138) 0:30:21.210 ********
2026-04-09 05:41:20.485926 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:20.485936 | orchestrator |
2026-04-09 05:41:20.485946 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 05:41:20.485963 | orchestrator | Thursday 09 April 2026 05:41:20 +0000 (0:00:01.138) 0:30:22.349 ********
2026-04-09 05:41:57.359273 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 05:41:57.359389 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 05:41:57.359404 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:41:57.359416 | orchestrator |
2026-04-09 05:41:57.359429 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 05:41:57.359442 | orchestrator | Thursday 09 April 2026 05:41:22 +0000 (0:00:02.033) 0:30:24.383 ********
2026-04-09 05:41:57.359453 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 05:41:57.359465 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 05:41:57.359476 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:41:57.359487 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.359498 | orchestrator |
2026-04-09 05:41:57.359525 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 05:41:57.359537 | orchestrator | Thursday 09 April 2026 05:41:23 +0000 (0:00:01.159) 0:30:25.543 ********
2026-04-09 05:41:57.359549 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.359560 | orchestrator |
2026-04-09 05:41:57.359571 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 05:41:57.359582 | orchestrator | Thursday 09 April 2026 05:41:24 +0000 (0:00:01.175) 0:30:26.719 ********
2026-04-09 05:41:57.359593 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:41:57.359605 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:41:57.359616 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:41:57.359627 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 05:41:57.359701 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:41:57.359715 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:41:57.359726 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:41:57.359737 | orchestrator |
2026-04-09 05:41:57.359748 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 05:41:57.359760 | orchestrator | Thursday 09 April 2026 05:41:27 +0000 (0:00:02.279) 0:30:28.998 ********
2026-04-09 05:41:57.359794 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:41:57.359806 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:41:57.359819 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:41:57.359832 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 05:41:57.359845 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:41:57.359857 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:41:57.359870 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:41:57.359884 | orchestrator |
2026-04-09 05:41:57.359896 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 05:41:57.359909 | orchestrator | Thursday 09 April 2026 05:41:29 +0000 (0:00:01.211) 0:30:31.254 ********
2026-04-09 05:41:57.359920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-04-09 05:41:57.359933 | orchestrator |
2026-04-09 05:41:57.359944 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 05:41:57.359955 | orchestrator | Thursday 09 April 2026 05:41:30 +0000 (0:00:01.211) 0:30:32.465 ********
2026-04-09 05:41:57.359966 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-04-09 05:41:57.359977 | orchestrator |
2026-04-09 05:41:57.359988 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 05:41:57.359999 | orchestrator | Thursday 09 April 2026 05:41:31 +0000 (0:00:01.123) 0:30:33.589 ********
2026-04-09 05:41:57.360010 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:41:57.360021 | orchestrator |
2026-04-09 05:41:57.360032 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 05:41:57.360043 | orchestrator | Thursday 09 April 2026 05:41:33 +0000 (0:00:01.603) 0:30:35.192 ********
2026-04-09 05:41:57.360053 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360064 | orchestrator |
2026-04-09 05:41:57.360075 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 05:41:57.360086 | orchestrator | Thursday 09 April 2026 05:41:34 +0000 (0:00:01.163) 0:30:36.356 ********
2026-04-09 05:41:57.360097 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360108 | orchestrator |
2026-04-09 05:41:57.360119 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 05:41:57.360130 | orchestrator | Thursday 09 April 2026 05:41:35 +0000 (0:00:01.159) 0:30:37.515 ********
2026-04-09 05:41:57.360141 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360151 | orchestrator |
2026-04-09 05:41:57.360162 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 05:41:57.360173 | orchestrator | Thursday 09 April 2026 05:41:36 +0000 (0:00:01.122) 0:30:38.638 ********
2026-04-09 05:41:57.360184 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:41:57.360195 | orchestrator |
2026-04-09 05:41:57.360206 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 05:41:57.360216 | orchestrator | Thursday 09 April 2026 05:41:38 +0000 (0:00:01.568) 0:30:40.206 ********
2026-04-09 05:41:57.360227 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360238 | orchestrator |
2026-04-09 05:41:57.360249 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 05:41:57.360279 | orchestrator | Thursday 09 April 2026 05:41:39 +0000 (0:00:01.188) 0:30:41.394 ********
2026-04-09 05:41:57.360291 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360302 | orchestrator |
2026-04-09 05:41:57.360313 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 05:41:57.360324 | orchestrator | Thursday 09 April 2026 05:41:40 +0000 (0:00:01.130) 0:30:42.525 ********
2026-04-09 05:41:57.360335 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:41:57.360346 | orchestrator |
2026-04-09 05:41:57.360364 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 05:41:57.360376 | orchestrator | Thursday 09 April 2026 05:41:42 +0000 (0:00:01.572) 0:30:44.097 ********
2026-04-09 05:41:57.360387 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:41:57.360398 | orchestrator |
2026-04-09 05:41:57.360409 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 05:41:57.360425 | orchestrator | Thursday 09 April 2026 05:41:43 +0000 (0:00:01.545) 0:30:45.642 ********
2026-04-09 05:41:57.360436 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360447 | orchestrator |
2026-04-09 05:41:57.360458 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 05:41:57.360469 | orchestrator | Thursday 09 April 2026 05:41:44 +0000 (0:00:00.780) 0:30:46.423 ********
2026-04-09 05:41:57.360480 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:41:57.360491 | orchestrator |
2026-04-09 05:41:57.360502 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 05:41:57.360513 | orchestrator | Thursday 09 April 2026 05:41:45 +0000 (0:00:00.800) 0:30:47.223 ********
2026-04-09 05:41:57.360524 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360535 | orchestrator |
2026-04-09 05:41:57.360546 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 05:41:57.360557 | orchestrator | Thursday 09 April 2026 05:41:46 +0000 (0:00:00.762) 0:30:47.986 ********
2026-04-09 05:41:57.360567 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360578 | orchestrator |
2026-04-09 05:41:57.360589 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 05:41:57.360600 | orchestrator | Thursday 09 April 2026 05:41:46 +0000 (0:00:00.764) 0:30:48.750 ********
2026-04-09 05:41:57.360611 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360622 | orchestrator |
2026-04-09 05:41:57.360633 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 05:41:57.360660 | orchestrator | Thursday 09 April 2026 05:41:47 +0000 (0:00:00.787) 0:30:49.538 ********
2026-04-09 05:41:57.360672 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360683 | orchestrator |
2026-04-09 05:41:57.360694 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 05:41:57.360705 | orchestrator | Thursday 09 April 2026 05:41:48 +0000 (0:00:00.840) 0:30:50.378 ********
2026-04-09 05:41:57.360716 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:41:57.360727 | orchestrator |
2026-04-09 05:41:57.360738 | orchestrator |
TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 05:41:57.360748 | orchestrator | Thursday 09 April 2026 05:41:49 +0000 (0:00:00.749) 0:30:51.128 ******** 2026-04-09 05:41:57.360759 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:41:57.360770 | orchestrator | 2026-04-09 05:41:57.360781 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 05:41:57.360792 | orchestrator | Thursday 09 April 2026 05:41:50 +0000 (0:00:00.778) 0:30:51.907 ******** 2026-04-09 05:41:57.360803 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:41:57.360815 | orchestrator | 2026-04-09 05:41:57.360826 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 05:41:57.360837 | orchestrator | Thursday 09 April 2026 05:41:50 +0000 (0:00:00.833) 0:30:52.740 ******** 2026-04-09 05:41:57.360848 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:41:57.360859 | orchestrator | 2026-04-09 05:41:57.360870 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 05:41:57.360881 | orchestrator | Thursday 09 April 2026 05:41:51 +0000 (0:00:00.805) 0:30:53.545 ******** 2026-04-09 05:41:57.360891 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:57.360902 | orchestrator | 2026-04-09 05:41:57.360914 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 05:41:57.360925 | orchestrator | Thursday 09 April 2026 05:41:52 +0000 (0:00:00.868) 0:30:54.414 ******** 2026-04-09 05:41:57.360935 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:57.360946 | orchestrator | 2026-04-09 05:41:57.360964 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 05:41:57.360975 | orchestrator | Thursday 09 April 2026 05:41:53 +0000 (0:00:00.795) 0:30:55.210 ******** 2026-04-09 05:41:57.360986 | 
orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:57.360997 | orchestrator | 2026-04-09 05:41:57.361008 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 05:41:57.361019 | orchestrator | Thursday 09 April 2026 05:41:54 +0000 (0:00:00.786) 0:30:55.996 ******** 2026-04-09 05:41:57.361030 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:57.361040 | orchestrator | 2026-04-09 05:41:57.361051 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 05:41:57.361062 | orchestrator | Thursday 09 April 2026 05:41:54 +0000 (0:00:00.774) 0:30:56.771 ******** 2026-04-09 05:41:57.361073 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:57.361084 | orchestrator | 2026-04-09 05:41:57.361095 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 05:41:57.361106 | orchestrator | Thursday 09 April 2026 05:41:55 +0000 (0:00:00.802) 0:30:57.573 ******** 2026-04-09 05:41:57.361117 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:57.361128 | orchestrator | 2026-04-09 05:41:57.361138 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 05:41:57.361149 | orchestrator | Thursday 09 April 2026 05:41:56 +0000 (0:00:00.769) 0:30:58.343 ******** 2026-04-09 05:41:57.361160 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:41:57.361171 | orchestrator | 2026-04-09 05:41:57.361182 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-09 05:41:57.361193 | orchestrator | Thursday 09 April 2026 05:41:57 +0000 (0:00:00.825) 0:30:59.169 ******** 2026-04-09 05:41:57.361210 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.797235 | orchestrator | 2026-04-09 05:42:41.797421 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] 
************************* 2026-04-09 05:42:41.797452 | orchestrator | Thursday 09 April 2026 05:41:58 +0000 (0:00:00.801) 0:30:59.971 ******** 2026-04-09 05:42:41.797474 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.797496 | orchestrator | 2026-04-09 05:42:41.797516 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 05:42:41.797536 | orchestrator | Thursday 09 April 2026 05:41:58 +0000 (0:00:00.789) 0:31:00.760 ******** 2026-04-09 05:42:41.797555 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.797575 | orchestrator | 2026-04-09 05:42:41.797593 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 05:42:41.797611 | orchestrator | Thursday 09 April 2026 05:41:59 +0000 (0:00:00.793) 0:31:01.553 ******** 2026-04-09 05:42:41.797690 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.797715 | orchestrator | 2026-04-09 05:42:41.797733 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-09 05:42:41.797754 | orchestrator | Thursday 09 April 2026 05:42:00 +0000 (0:00:00.771) 0:31:02.325 ******** 2026-04-09 05:42:41.797774 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.797794 | orchestrator | 2026-04-09 05:42:41.797813 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 05:42:41.797834 | orchestrator | Thursday 09 April 2026 05:42:01 +0000 (0:00:00.757) 0:31:03.083 ******** 2026-04-09 05:42:41.797854 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:42:41.797875 | orchestrator | 2026-04-09 05:42:41.797898 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 05:42:41.797919 | orchestrator | Thursday 09 April 2026 05:42:02 +0000 (0:00:01.660) 0:31:04.744 ******** 2026-04-09 05:42:41.797940 | orchestrator | ok: [testbed-node-2] 2026-04-09 
05:42:41.797961 | orchestrator | 2026-04-09 05:42:41.797981 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 05:42:41.798002 | orchestrator | Thursday 09 April 2026 05:42:04 +0000 (0:00:02.123) 0:31:06.868 ******** 2026-04-09 05:42:41.798121 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-04-09 05:42:41.798186 | orchestrator | 2026-04-09 05:42:41.798208 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 05:42:41.798229 | orchestrator | Thursday 09 April 2026 05:42:06 +0000 (0:00:01.112) 0:31:07.980 ******** 2026-04-09 05:42:41.798249 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.798270 | orchestrator | 2026-04-09 05:42:41.798290 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 05:42:41.798311 | orchestrator | Thursday 09 April 2026 05:42:07 +0000 (0:00:01.141) 0:31:09.121 ******** 2026-04-09 05:42:41.798330 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.798349 | orchestrator | 2026-04-09 05:42:41.798368 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 05:42:41.798388 | orchestrator | Thursday 09 April 2026 05:42:08 +0000 (0:00:01.165) 0:31:10.287 ******** 2026-04-09 05:42:41.798407 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 05:42:41.798428 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 05:42:41.798449 | orchestrator | 2026-04-09 05:42:41.798469 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 05:42:41.798489 | orchestrator | Thursday 09 April 2026 05:42:10 +0000 (0:00:01.865) 0:31:12.152 ******** 2026-04-09 05:42:41.798509 | orchestrator | ok: 
[testbed-node-2] 2026-04-09 05:42:41.798528 | orchestrator | 2026-04-09 05:42:41.798548 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 05:42:41.798569 | orchestrator | Thursday 09 April 2026 05:42:11 +0000 (0:00:01.467) 0:31:13.620 ******** 2026-04-09 05:42:41.798589 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.798608 | orchestrator | 2026-04-09 05:42:41.798628 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 05:42:41.798649 | orchestrator | Thursday 09 April 2026 05:42:12 +0000 (0:00:01.177) 0:31:14.797 ******** 2026-04-09 05:42:41.798697 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.798717 | orchestrator | 2026-04-09 05:42:41.798736 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 05:42:41.798756 | orchestrator | Thursday 09 April 2026 05:42:13 +0000 (0:00:00.783) 0:31:15.581 ******** 2026-04-09 05:42:41.798776 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.798796 | orchestrator | 2026-04-09 05:42:41.798815 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 05:42:41.798832 | orchestrator | Thursday 09 April 2026 05:42:14 +0000 (0:00:00.830) 0:31:16.412 ******** 2026-04-09 05:42:41.798852 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-04-09 05:42:41.798871 | orchestrator | 2026-04-09 05:42:41.798890 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 05:42:41.798909 | orchestrator | Thursday 09 April 2026 05:42:15 +0000 (0:00:01.097) 0:31:17.509 ******** 2026-04-09 05:42:41.798929 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:42:41.798948 | orchestrator | 2026-04-09 05:42:41.798968 | orchestrator | TASK [ceph-container-common : Pulling 
alertmanager/prometheus/grafana container images] *** 2026-04-09 05:42:41.798988 | orchestrator | Thursday 09 April 2026 05:42:17 +0000 (0:00:01.907) 0:31:19.417 ******** 2026-04-09 05:42:41.799009 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 05:42:41.799029 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 05:42:41.799050 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 05:42:41.799071 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.799091 | orchestrator | 2026-04-09 05:42:41.799111 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-09 05:42:41.799131 | orchestrator | Thursday 09 April 2026 05:42:18 +0000 (0:00:01.178) 0:31:20.595 ******** 2026-04-09 05:42:41.799184 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.799225 | orchestrator | 2026-04-09 05:42:41.799247 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-09 05:42:41.799267 | orchestrator | Thursday 09 April 2026 05:42:19 +0000 (0:00:01.123) 0:31:21.719 ******** 2026-04-09 05:42:41.799288 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.799308 | orchestrator | 2026-04-09 05:42:41.799328 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-09 05:42:41.799349 | orchestrator | Thursday 09 April 2026 05:42:20 +0000 (0:00:01.139) 0:31:22.858 ******** 2026-04-09 05:42:41.799369 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.799389 | orchestrator | 2026-04-09 05:42:41.799410 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-09 05:42:41.799443 | orchestrator | Thursday 09 April 2026 05:42:22 +0000 (0:00:01.204) 0:31:24.063 ******** 2026-04-09 05:42:41.799465 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 05:42:41.799486 | orchestrator | 2026-04-09 05:42:41.799507 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-09 05:42:41.799526 | orchestrator | Thursday 09 April 2026 05:42:23 +0000 (0:00:01.126) 0:31:25.189 ******** 2026-04-09 05:42:41.799547 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.799590 | orchestrator | 2026-04-09 05:42:41.799615 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 05:42:41.799635 | orchestrator | Thursday 09 April 2026 05:42:24 +0000 (0:00:00.814) 0:31:26.004 ******** 2026-04-09 05:42:41.799689 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:42:41.799713 | orchestrator | 2026-04-09 05:42:41.799733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 05:42:41.799754 | orchestrator | Thursday 09 April 2026 05:42:26 +0000 (0:00:02.166) 0:31:28.171 ******** 2026-04-09 05:42:41.799774 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:42:41.799794 | orchestrator | 2026-04-09 05:42:41.799814 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 05:42:41.799835 | orchestrator | Thursday 09 April 2026 05:42:27 +0000 (0:00:00.778) 0:31:28.949 ******** 2026-04-09 05:42:41.799855 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-04-09 05:42:41.799876 | orchestrator | 2026-04-09 05:42:41.799896 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-09 05:42:41.799917 | orchestrator | Thursday 09 April 2026 05:42:28 +0000 (0:00:01.121) 0:31:30.071 ******** 2026-04-09 05:42:41.799937 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.799957 | orchestrator | 2026-04-09 05:42:41.799977 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] 
******************** 2026-04-09 05:42:41.799997 | orchestrator | Thursday 09 April 2026 05:42:29 +0000 (0:00:01.159) 0:31:31.231 ******** 2026-04-09 05:42:41.800017 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.800038 | orchestrator | 2026-04-09 05:42:41.800059 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-09 05:42:41.800079 | orchestrator | Thursday 09 April 2026 05:42:30 +0000 (0:00:01.161) 0:31:32.393 ******** 2026-04-09 05:42:41.800100 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.800120 | orchestrator | 2026-04-09 05:42:41.800138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-09 05:42:41.800157 | orchestrator | Thursday 09 April 2026 05:42:31 +0000 (0:00:01.150) 0:31:33.544 ******** 2026-04-09 05:42:41.800175 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.800194 | orchestrator | 2026-04-09 05:42:41.800211 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-09 05:42:41.800230 | orchestrator | Thursday 09 April 2026 05:42:32 +0000 (0:00:01.194) 0:31:34.738 ******** 2026-04-09 05:42:41.800249 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.800269 | orchestrator | 2026-04-09 05:42:41.800288 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-09 05:42:41.800308 | orchestrator | Thursday 09 April 2026 05:42:34 +0000 (0:00:01.154) 0:31:35.893 ******** 2026-04-09 05:42:41.800344 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.800365 | orchestrator | 2026-04-09 05:42:41.800386 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-09 05:42:41.800406 | orchestrator | Thursday 09 April 2026 05:42:35 +0000 (0:00:01.170) 0:31:37.063 ******** 2026-04-09 05:42:41.800426 | orchestrator | skipping: [testbed-node-2] 
2026-04-09 05:42:41.800447 | orchestrator | 2026-04-09 05:42:41.800467 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-09 05:42:41.800487 | orchestrator | Thursday 09 April 2026 05:42:36 +0000 (0:00:01.162) 0:31:38.226 ******** 2026-04-09 05:42:41.800508 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:42:41.800528 | orchestrator | 2026-04-09 05:42:41.800548 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-09 05:42:41.800568 | orchestrator | Thursday 09 April 2026 05:42:37 +0000 (0:00:01.154) 0:31:39.380 ******** 2026-04-09 05:42:41.800589 | orchestrator | ok: [testbed-node-2] 2026-04-09 05:42:41.800608 | orchestrator | 2026-04-09 05:42:41.800630 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-09 05:42:41.800650 | orchestrator | Thursday 09 April 2026 05:42:38 +0000 (0:00:00.833) 0:31:40.213 ******** 2026-04-09 05:42:41.800699 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-04-09 05:42:41.800719 | orchestrator | 2026-04-09 05:42:41.800739 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-09 05:42:41.800758 | orchestrator | Thursday 09 April 2026 05:42:39 +0000 (0:00:01.126) 0:31:41.339 ******** 2026-04-09 05:42:41.800778 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-04-09 05:42:41.800797 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-09 05:42:41.800815 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-09 05:42:41.800834 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-09 05:42:41.800853 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-09 05:42:41.800873 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-09 05:42:41.800909 | 
orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-09 05:43:20.828465 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-09 05:43:20.828596 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-09 05:43:20.828615 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-09 05:43:20.828629 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-09 05:43:20.828642 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-09 05:43:20.828654 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-09 05:43:20.828666 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-09 05:43:20.828727 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-04-09 05:43:20.828755 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-04-09 05:43:20.828767 | orchestrator | 2026-04-09 05:43:20.828779 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-09 05:43:20.828791 | orchestrator | Thursday 09 April 2026 05:42:45 +0000 (0:00:06.127) 0:31:47.467 ******** 2026-04-09 05:43:20.828802 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.828813 | orchestrator | 2026-04-09 05:43:20.828825 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 05:43:20.828836 | orchestrator | Thursday 09 April 2026 05:42:46 +0000 (0:00:00.800) 0:31:48.267 ******** 2026-04-09 05:43:20.828847 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.828859 | orchestrator | 2026-04-09 05:43:20.828870 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 05:43:20.828881 | orchestrator | Thursday 09 April 2026 05:42:47 +0000 (0:00:00.781) 0:31:49.049 ******** 2026-04-09 05:43:20.828893 | 
orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.828927 | orchestrator | 2026-04-09 05:43:20.828939 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 05:43:20.828952 | orchestrator | Thursday 09 April 2026 05:42:47 +0000 (0:00:00.757) 0:31:49.806 ******** 2026-04-09 05:43:20.828963 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.828975 | orchestrator | 2026-04-09 05:43:20.828985 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-09 05:43:20.828997 | orchestrator | Thursday 09 April 2026 05:42:48 +0000 (0:00:00.803) 0:31:50.610 ******** 2026-04-09 05:43:20.829008 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829021 | orchestrator | 2026-04-09 05:43:20.829033 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 05:43:20.829045 | orchestrator | Thursday 09 April 2026 05:42:49 +0000 (0:00:00.807) 0:31:51.417 ******** 2026-04-09 05:43:20.829056 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829066 | orchestrator | 2026-04-09 05:43:20.829077 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-09 05:43:20.829089 | orchestrator | Thursday 09 April 2026 05:42:50 +0000 (0:00:00.837) 0:31:52.254 ******** 2026-04-09 05:43:20.829100 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829110 | orchestrator | 2026-04-09 05:43:20.829120 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-09 05:43:20.829131 | orchestrator | Thursday 09 April 2026 05:42:51 +0000 (0:00:00.845) 0:31:53.099 ******** 2026-04-09 05:43:20.829142 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829152 | orchestrator | 2026-04-09 05:43:20.829164 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-09 05:43:20.829175 | orchestrator | Thursday 09 April 2026 05:42:52 +0000 (0:00:00.857) 0:31:53.957 ******** 2026-04-09 05:43:20.829185 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829195 | orchestrator | 2026-04-09 05:43:20.829206 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-09 05:43:20.829216 | orchestrator | Thursday 09 April 2026 05:42:52 +0000 (0:00:00.805) 0:31:54.762 ******** 2026-04-09 05:43:20.829226 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829237 | orchestrator | 2026-04-09 05:43:20.829247 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-09 05:43:20.829258 | orchestrator | Thursday 09 April 2026 05:42:53 +0000 (0:00:00.778) 0:31:55.541 ******** 2026-04-09 05:43:20.829268 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829279 | orchestrator | 2026-04-09 05:43:20.829289 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-09 05:43:20.829299 | orchestrator | Thursday 09 April 2026 05:42:54 +0000 (0:00:00.779) 0:31:56.321 ******** 2026-04-09 05:43:20.829308 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829318 | orchestrator | 2026-04-09 05:43:20.829329 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-09 05:43:20.829340 | orchestrator | Thursday 09 April 2026 05:42:55 +0000 (0:00:00.799) 0:31:57.120 ******** 2026-04-09 05:43:20.829350 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829359 | orchestrator | 2026-04-09 05:43:20.829368 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-09 05:43:20.829378 | orchestrator | Thursday 09 April 2026 05:42:56 +0000 (0:00:00.857) 0:31:57.977 ******** 
2026-04-09 05:43:20.829388 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829399 | orchestrator | 2026-04-09 05:43:20.829409 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 05:43:20.829420 | orchestrator | Thursday 09 April 2026 05:42:56 +0000 (0:00:00.799) 0:31:58.777 ******** 2026-04-09 05:43:20.829432 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829442 | orchestrator | 2026-04-09 05:43:20.829451 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 05:43:20.829461 | orchestrator | Thursday 09 April 2026 05:42:57 +0000 (0:00:00.902) 0:31:59.679 ******** 2026-04-09 05:43:20.829483 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829493 | orchestrator | 2026-04-09 05:43:20.829503 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 05:43:20.829513 | orchestrator | Thursday 09 April 2026 05:42:58 +0000 (0:00:00.778) 0:32:00.457 ******** 2026-04-09 05:43:20.829544 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829556 | orchestrator | 2026-04-09 05:43:20.829567 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 05:43:20.829579 | orchestrator | Thursday 09 April 2026 05:42:59 +0000 (0:00:00.844) 0:32:01.301 ******** 2026-04-09 05:43:20.829588 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829598 | orchestrator | 2026-04-09 05:43:20.829609 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 05:43:20.829620 | orchestrator | Thursday 09 April 2026 05:43:00 +0000 (0:00:00.777) 0:32:02.079 ******** 2026-04-09 05:43:20.829629 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829639 | orchestrator | 2026-04-09 05:43:20.829657 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 05:43:20.829690 | orchestrator | Thursday 09 April 2026 05:43:01 +0000 (0:00:00.819) 0:32:02.898 ******** 2026-04-09 05:43:20.829701 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829710 | orchestrator | 2026-04-09 05:43:20.829719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 05:43:20.829729 | orchestrator | Thursday 09 April 2026 05:43:01 +0000 (0:00:00.804) 0:32:03.702 ******** 2026-04-09 05:43:20.829739 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829749 | orchestrator | 2026-04-09 05:43:20.829760 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 05:43:20.829770 | orchestrator | Thursday 09 April 2026 05:43:02 +0000 (0:00:00.783) 0:32:04.485 ******** 2026-04-09 05:43:20.829780 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-09 05:43:20.829791 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-09 05:43:20.829801 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-09 05:43:20.829812 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829823 | orchestrator | 2026-04-09 05:43:20.829833 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 05:43:20.829843 | orchestrator | Thursday 09 April 2026 05:43:03 +0000 (0:00:01.081) 0:32:05.567 ******** 2026-04-09 05:43:20.829854 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-09 05:43:20.829865 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-09 05:43:20.829875 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-09 05:43:20.829886 | orchestrator | skipping: [testbed-node-2] 2026-04-09 05:43:20.829897 | orchestrator | 2026-04-09 05:43:20.829907 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 05:43:20.829918 | orchestrator | Thursday 09 April 2026 05:43:04 +0000 (0:00:01.052) 0:32:06.620 ********
2026-04-09 05:43:20.829928 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-09 05:43:20.829938 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-09 05:43:20.829949 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-09 05:43:20.829960 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:43:20.829970 | orchestrator |
2026-04-09 05:43:20.829981 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 05:43:20.829991 | orchestrator | Thursday 09 April 2026 05:43:05 +0000 (0:00:01.118) 0:32:07.739 ********
2026-04-09 05:43:20.830001 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:43:20.830011 | orchestrator |
2026-04-09 05:43:20.830074 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 05:43:20.830085 | orchestrator | Thursday 09 April 2026 05:43:06 +0000 (0:00:00.766) 0:32:08.506 ********
2026-04-09 05:43:20.830107 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-09 05:43:20.830119 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:43:20.830129 | orchestrator |
2026-04-09 05:43:20.830139 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 05:43:20.830150 | orchestrator | Thursday 09 April 2026 05:43:07 +0000 (0:00:00.924) 0:32:09.430 ********
2026-04-09 05:43:20.830161 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:43:20.830182 | orchestrator |
2026-04-09 05:43:20.830192 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-09 05:43:20.830204 | orchestrator | Thursday 09 April 2026 05:43:08 +0000 (0:00:01.430) 0:32:10.861 ********
2026-04-09 05:43:20.830214 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:43:20.830226 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:43:20.830237 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 05:43:20.830248 | orchestrator |
2026-04-09 05:43:20.830260 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-09 05:43:20.830270 | orchestrator | Thursday 09 April 2026 05:43:10 +0000 (0:00:01.721) 0:32:12.583 ********
2026-04-09 05:43:20.830280 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-04-09 05:43:20.830291 | orchestrator |
2026-04-09 05:43:20.830301 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-09 05:43:20.830312 | orchestrator | Thursday 09 April 2026 05:43:11 +0000 (0:00:01.196) 0:32:13.780 ********
2026-04-09 05:43:20.830322 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:43:20.830334 | orchestrator |
2026-04-09 05:43:20.830346 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-09 05:43:20.830358 | orchestrator | Thursday 09 April 2026 05:43:13 +0000 (0:00:01.482) 0:32:15.262 ********
2026-04-09 05:43:20.830370 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:43:20.830382 | orchestrator |
2026-04-09 05:43:20.830394 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-09 05:43:20.830407 | orchestrator | Thursday 09 April 2026 05:43:14 +0000 (0:00:01.145) 0:32:16.408 ********
2026-04-09 05:43:20.830419 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 05:43:20.830431 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 05:43:20.830454 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 05:44:07.433007 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-04-09 05:44:07.433127 | orchestrator |
2026-04-09 05:44:07.433144 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-09 05:44:07.433155 | orchestrator | Thursday 09 April 2026 05:43:21 +0000 (0:00:07.433) 0:32:23.841 ********
2026-04-09 05:44:07.433167 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:44:07.433180 | orchestrator |
2026-04-09 05:44:07.433191 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-09 05:44:07.433203 | orchestrator | Thursday 09 April 2026 05:43:23 +0000 (0:00:01.183) 0:32:25.025 ********
2026-04-09 05:44:07.433214 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-09 05:44:07.433241 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-09 05:44:07.433253 | orchestrator |
2026-04-09 05:44:07.433264 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-09 05:44:07.433276 | orchestrator | Thursday 09 April 2026 05:43:26 +0000 (0:00:03.224) 0:32:28.250 ********
2026-04-09 05:44:07.433287 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-09 05:44:07.433298 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-09 05:44:07.433309 | orchestrator |
2026-04-09 05:44:07.433321 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-09 05:44:07.433331 | orchestrator | Thursday 09 April 2026 05:43:28 +0000 (0:00:01.938) 0:32:30.188 ********
2026-04-09 05:44:07.433365 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:44:07.433377 | orchestrator |
2026-04-09 05:44:07.433388 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-09 05:44:07.433399 | orchestrator | Thursday 09 April 2026 05:43:29 +0000 (0:00:01.511) 0:32:31.700 ********
2026-04-09 05:44:07.433410 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:44:07.433421 | orchestrator |
2026-04-09 05:44:07.433432 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-09 05:44:07.433443 | orchestrator | Thursday 09 April 2026 05:43:30 +0000 (0:00:00.776) 0:32:32.476 ********
2026-04-09 05:44:07.433453 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:44:07.433464 | orchestrator |
2026-04-09 05:44:07.433475 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-09 05:44:07.433485 | orchestrator | Thursday 09 April 2026 05:43:31 +0000 (0:00:00.765) 0:32:33.241 ********
2026-04-09 05:44:07.433496 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-04-09 05:44:07.433508 | orchestrator |
2026-04-09 05:44:07.433519 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-09 05:44:07.433529 | orchestrator | Thursday 09 April 2026 05:43:32 +0000 (0:00:01.285) 0:32:34.527 ********
2026-04-09 05:44:07.433543 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:44:07.433556 | orchestrator |
2026-04-09 05:44:07.433570 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-09 05:44:07.433583 | orchestrator | Thursday 09 April 2026 05:43:33 +0000 (0:00:01.141) 0:32:35.668 ********
2026-04-09 05:44:07.433596 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:44:07.433608 | orchestrator |
2026-04-09 05:44:07.433622 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-09 05:44:07.433635 | orchestrator | Thursday 09 April 2026 05:43:34 +0000 (0:00:01.155) 0:32:36.824 ********
2026-04-09 05:44:07.433648 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-04-09 05:44:07.433661 | orchestrator |
2026-04-09 05:44:07.433675 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-09 05:44:07.433716 | orchestrator | Thursday 09 April 2026 05:43:36 +0000 (0:00:01.101) 0:32:37.925 ********
2026-04-09 05:44:07.433732 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:44:07.433745 | orchestrator |
2026-04-09 05:44:07.433758 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-09 05:44:07.433771 | orchestrator | Thursday 09 April 2026 05:43:38 +0000 (0:00:02.031) 0:32:39.957 ********
2026-04-09 05:44:07.433784 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:44:07.433797 | orchestrator |
2026-04-09 05:44:07.433810 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-09 05:44:07.433824 | orchestrator | Thursday 09 April 2026 05:43:40 +0000 (0:00:01.956) 0:32:41.913 ********
2026-04-09 05:44:07.433837 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:44:07.433850 | orchestrator |
2026-04-09 05:44:07.433863 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-09 05:44:07.433874 | orchestrator | Thursday 09 April 2026 05:43:42 +0000 (0:00:02.377) 0:32:44.290 ********
2026-04-09 05:44:07.433885 | orchestrator | changed: [testbed-node-2]
2026-04-09 05:44:07.433896 | orchestrator |
2026-04-09 05:44:07.433907 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-09 05:44:07.433918 | orchestrator | Thursday 09 April 2026 05:43:45 +0000 (0:00:03.441) 0:32:47.732 ********
2026-04-09 05:44:07.433929 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-09 05:44:07.433939 | orchestrator |
2026-04-09 05:44:07.433950 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-09 05:44:07.433961 | orchestrator | Thursday 09 April 2026 05:43:47 +0000 (0:00:01.512) 0:32:49.245 ********
2026-04-09 05:44:07.433973 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-09 05:44:07.433983 | orchestrator |
2026-04-09 05:44:07.433994 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-09 05:44:07.434076 | orchestrator | Thursday 09 April 2026 05:43:49 +0000 (0:00:02.377) 0:32:51.622 ********
2026-04-09 05:44:07.434090 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-09 05:44:07.434101 | orchestrator |
2026-04-09 05:44:07.434112 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-09 05:44:07.434123 | orchestrator | Thursday 09 April 2026 05:43:52 +0000 (0:00:02.381) 0:32:54.003 ********
2026-04-09 05:44:07.434133 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:44:07.434144 | orchestrator |
2026-04-09 05:44:07.434156 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-09 05:44:07.434186 | orchestrator | Thursday 09 April 2026 05:43:53 +0000 (0:00:01.397) 0:32:55.401 ********
2026-04-09 05:44:07.434198 | orchestrator | ok: [testbed-node-2]
2026-04-09 05:44:07.434208 | orchestrator |
2026-04-09 05:44:07.434220 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-09 05:44:07.434231 | orchestrator | Thursday 09 April 2026 05:43:54 +0000 (0:00:01.157) 0:32:56.559 ********
2026-04-09 05:44:07.434242 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-04-09 05:44:07.434253 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-04-09 05:44:07.434264 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:44:07.434276 | orchestrator |
2026-04-09 05:44:07.434287 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-09 05:44:07.434304 | orchestrator | Thursday 09 April 2026 05:43:56 +0000 (0:00:01.371) 0:32:57.930 ********
2026-04-09 05:44:07.434315 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-09 05:44:07.434326 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-04-09 05:44:07.434337 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-04-09 05:44:07.434348 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-09 05:44:07.434359 | orchestrator | skipping: [testbed-node-2]
2026-04-09 05:44:07.434370 | orchestrator |
2026-04-09 05:44:07.434381 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-04-09 05:44:07.434392 | orchestrator |
2026-04-09 05:44:07.434403 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 05:44:07.434414 | orchestrator | Thursday 09 April 2026 05:43:58 +0000 (0:00:01.963) 0:32:59.894 ********
2026-04-09 05:44:07.434425 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:44:07.434436 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:44:07.434447 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:44:07.434458 | orchestrator |
2026-04-09 05:44:07.434469 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 05:44:07.434480 | orchestrator | Thursday 09 April 2026 05:43:59 +0000 (0:00:01.686) 0:33:01.580 ********
2026-04-09 05:44:07.434491 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:44:07.434502 | orchestrator | ok: [testbed-node-4]
2026-04-09 05:44:07.434513 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:44:07.434524 | orchestrator |
2026-04-09 05:44:07.434535 | orchestrator | TASK [Get pool list] ***********************************************************
2026-04-09 05:44:07.434546 | orchestrator | Thursday 09 April 2026
05:44:01 +0000 (0:00:01.688) 0:33:03.269 ******** 2026-04-09 05:44:07.434557 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:44:07.434568 | orchestrator | 2026-04-09 05:44:07.434579 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-04-09 05:44:07.434590 | orchestrator | Thursday 09 April 2026 05:44:04 +0000 (0:00:02.902) 0:33:06.172 ******** 2026-04-09 05:44:07.434601 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:44:07.434612 | orchestrator | 2026-04-09 05:44:07.434623 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-04-09 05:44:07.434634 | orchestrator | Thursday 09 April 2026 05:44:07 +0000 (0:00:02.822) 0:33:08.994 ******** 2026-04-09 05:44:07.434651 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-04-09T03:01:38.765589+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:07.434729 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-04-09T03:02:55.054330+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:07.878357 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-04-09T03:02:58.868963+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '66', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 2.25, 'score_stable': 2.25, 'optimal_score': 1, 'raw_score_acting': 2.25, 'raw_score_stable': 2.25, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:07.878519 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-04-09T03:04:00.243956+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 
'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:07.878540 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-04-09T03:04:06.674724+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:07.878570 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-04-09T03:04:12.263033+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 
'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:07.878594 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-04-09T03:04:18.717207+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 
'target_version': "0'0"}, 'last_change': '187', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:08.635410 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-04-09T03:04:24.796949+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 
'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '77', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:08.635517 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-04-09T03:04:36.951056+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 
'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '123', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '116', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:08.635581 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-04-09T03:05:26.035058+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 
'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '104', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 104, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:08.635613 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-04-09T03:05:35.053704+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '113', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 113, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:44:08.635641 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-04-09T03:05:43.931589+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '197', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 197, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:45:44.814413 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-04-09T03:05:52.961944+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '130', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 130, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:45:44.814560 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 
'vms', 'create_time': '2026-04-09T03:06:01.062594+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '138', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 138, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-09 05:45:44.814603 | orchestrator | 2026-04-09 
05:45:44.814617 | orchestrator | TASK [Disable balancer] ********************************************************
2026-04-09 05:45:44.814629 | orchestrator | Thursday 09 April 2026 05:44:09 +0000 (0:00:02.828) 0:33:11.822 ********
2026-04-09 05:45:44.814639 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 05:45:44.814648 | orchestrator |
2026-04-09 05:45:44.814658 | orchestrator | TASK [Disable pg autoscale on pools] *******************************************
2026-04-09 05:45:44.814668 | orchestrator | Thursday 09 April 2026 05:44:13 +0000 (0:00:03.161) 0:33:14.984 ********
2026-04-09 05:45:44.814677 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-04-09 05:45:44.814689 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-04-09 05:45:44.814699 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-04-09 05:45:44.814709 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-04-09 05:45:44.814720 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-04-09 05:45:44.814757 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-04-09 05:45:44.814767 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-04-09 05:45:44.814777 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-04-09 05:45:44.814787 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-04-09 05:45:44.814796 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-04-09 05:45:44.814806 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-04-09 05:45:44.814816 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-04-09 05:45:44.814825 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-04-09 05:45:44.814835 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-04-09 05:45:44.814844 | orchestrator |
2026-04-09 05:45:44.814854 | orchestrator | TASK [Set osd flags] ***********************************************************
2026-04-09 05:45:44.814863 | orchestrator | Thursday 09 April 2026 05:45:28 +0000 (0:01:15.512) 0:34:30.497 ********
2026-04-09 05:45:44.814873 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-04-09 05:45:44.814882 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-04-09 05:45:44.814892 | orchestrator |
2026-04-09 05:45:44.814902 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-04-09 05:45:44.814911 | orchestrator |
2026-04-09 05:45:44.814921 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 05:45:44.814933 | orchestrator | Thursday 09 April 2026 05:45:34 +0000 (0:00:06.274) 0:34:36.771 ********
2026-04-09 05:45:44.814945 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-04-09 05:45:44.814956 | orchestrator |
2026-04-09 05:45:44.814967 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 05:45:44.814982 | orchestrator | Thursday 09 April 2026 05:45:36 +0000 (0:00:01.160) 0:34:37.932 ********
2026-04-09 05:45:44.814995 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:45:44.815008 | orchestrator |
2026-04-09 05:45:44.815020 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 05:45:44.815031 | orchestrator | Thursday 09 April 2026 05:45:37 +0000 (0:00:01.515) 0:34:39.448 ********
2026-04-09 05:45:44.815050 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:45:44.815062 | orchestrator |
2026-04-09 05:45:44.815073 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 05:45:44.815085 | orchestrator | Thursday 09 April 2026 05:45:38 +0000 (0:00:01.220) 0:34:40.668 ********
2026-04-09 05:45:44.815097 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:45:44.815108 | orchestrator |
2026-04-09 05:45:44.815120 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 05:45:44.815130 | orchestrator | Thursday 09 April 2026 05:45:40 +0000 (0:00:01.443) 0:34:42.111 ********
2026-04-09 05:45:44.815139 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:45:44.815149 | orchestrator |
2026-04-09 05:45:44.815158 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 05:45:44.815168 | orchestrator | Thursday 09 April 2026 05:45:41 +0000 (0:00:01.143) 0:34:43.255 ********
2026-04-09 05:45:44.815178 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:45:44.815187 | orchestrator |
2026-04-09 05:45:44.815197 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 05:45:44.815207 | orchestrator | Thursday 09 April 2026 05:45:42 +0000 (0:00:01.147) 0:34:44.402 ********
2026-04-09 05:45:44.815216 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:45:44.815226 | orchestrator |
2026-04-09 05:45:44.815236 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 05:45:44.815245 | orchestrator | Thursday 09 April 2026 05:45:43 +0000 (0:00:01.151) 0:34:45.554 ********
2026-04-09 05:45:44.815255 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:45:44.815265 | orchestrator |
2026-04-09 05:45:44.815275 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 05:45:44.815292 | orchestrator | Thursday 09 April 2026 05:45:44 +0000 (0:00:01.119) 0:34:46.673 ********
2026-04-09 05:46:09.347698 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:46:09.347874 | orchestrator |
2026-04-09 05:46:09.347894 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 05:46:09.347908 | orchestrator | Thursday 09 April 2026 05:45:45 +0000 (0:00:01.140) 0:34:47.814 ********
2026-04-09 05:46:09.347920 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:46:09.347932 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:46:09.347943 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:46:09.347955 | orchestrator |
2026-04-09 05:46:09.347967 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 05:46:09.347978 | orchestrator | Thursday 09 April 2026 05:45:47 +0000 (0:00:01.996) 0:34:49.810 ********
2026-04-09 05:46:09.347989 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:46:09.348000 | orchestrator |
2026-04-09 05:46:09.348011 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 05:46:09.348022 | orchestrator | Thursday 09 April 2026 05:45:49 +0000 (0:00:01.233) 0:34:51.044 ********
2026-04-09 05:46:09.348033 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:46:09.348044 | orchestrator | ok:
[testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:46:09.348054 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:46:09.348065 | orchestrator | 2026-04-09 05:46:09.348076 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 05:46:09.348087 | orchestrator | Thursday 09 April 2026 05:45:52 +0000 (0:00:03.278) 0:34:54.323 ******** 2026-04-09 05:46:09.348098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-09 05:46:09.348110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-09 05:46:09.348121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-09 05:46:09.348132 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:09.348173 | orchestrator | 2026-04-09 05:46:09.348185 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 05:46:09.348196 | orchestrator | Thursday 09 April 2026 05:45:54 +0000 (0:00:01.782) 0:34:56.106 ******** 2026-04-09 05:46:09.348209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 05:46:09.348224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 05:46:09.348236 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 05:46:09.348247 | 
orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:09.348258 | orchestrator | 2026-04-09 05:46:09.348269 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 05:46:09.348281 | orchestrator | Thursday 09 April 2026 05:45:56 +0000 (0:00:02.048) 0:34:58.154 ******** 2026-04-09 05:46:09.348309 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:09.348325 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:09.348337 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:09.348348 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:09.348360 | orchestrator | 2026-04-09 05:46:09.348371 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 05:46:09.348382 | orchestrator | Thursday 09 April 2026 
05:45:57 +0000 (0:00:01.274) 0:34:59.429 ******** 2026-04-09 05:46:09.348415 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:45:49.722134', 'end': '2026-04-09 05:45:49.775108', 'delta': '0:00:00.052974', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 05:46:09.348430 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:45:50.678757', 'end': '2026-04-09 05:45:50.727921', 'delta': '0:00:00.049164', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 05:46:09.348450 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:45:51.235850', 'end': '2026-04-09 05:45:51.278933', 'delta': '0:00:00.043083', 'msg': '', 'invocation': {'module_args': 
{'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 05:46:09.348462 | orchestrator | 2026-04-09 05:46:09.348474 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 05:46:09.348485 | orchestrator | Thursday 09 April 2026 05:45:58 +0000 (0:00:01.234) 0:35:00.663 ******** 2026-04-09 05:46:09.348496 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:09.348507 | orchestrator | 2026-04-09 05:46:09.348517 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 05:46:09.348528 | orchestrator | Thursday 09 April 2026 05:46:00 +0000 (0:00:01.342) 0:35:02.006 ******** 2026-04-09 05:46:09.348540 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:09.348550 | orchestrator | 2026-04-09 05:46:09.348561 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 05:46:09.348578 | orchestrator | Thursday 09 April 2026 05:46:01 +0000 (0:00:01.234) 0:35:03.241 ******** 2026-04-09 05:46:09.348589 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:09.348601 | orchestrator | 2026-04-09 05:46:09.348611 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 05:46:09.348622 | orchestrator | Thursday 09 April 2026 05:46:02 +0000 (0:00:01.197) 0:35:04.438 ******** 2026-04-09 05:46:09.348634 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:46:09.348645 | orchestrator | 2026-04-09 05:46:09.348656 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-04-09 05:46:09.348667 | orchestrator | Thursday 09 April 2026 05:46:04 +0000 (0:00:01.991) 0:35:06.430 ******** 2026-04-09 05:46:09.348678 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:09.348689 | orchestrator | 2026-04-09 05:46:09.348700 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 05:46:09.348710 | orchestrator | Thursday 09 April 2026 05:46:05 +0000 (0:00:01.158) 0:35:07.588 ******** 2026-04-09 05:46:09.348721 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:09.348777 | orchestrator | 2026-04-09 05:46:09.348789 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 05:46:09.348800 | orchestrator | Thursday 09 April 2026 05:46:06 +0000 (0:00:01.145) 0:35:08.734 ******** 2026-04-09 05:46:09.348811 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:09.348822 | orchestrator | 2026-04-09 05:46:09.348833 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:46:09.348844 | orchestrator | Thursday 09 April 2026 05:46:08 +0000 (0:00:01.216) 0:35:09.950 ******** 2026-04-09 05:46:09.348855 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:09.348866 | orchestrator | 2026-04-09 05:46:09.348877 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 05:46:09.348887 | orchestrator | Thursday 09 April 2026 05:46:09 +0000 (0:00:01.115) 0:35:11.065 ******** 2026-04-09 05:46:09.348906 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:09.348918 | orchestrator | 2026-04-09 05:46:09.348937 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 05:46:16.518971 | orchestrator | Thursday 09 April 2026 05:46:10 +0000 (0:00:01.144) 0:35:12.210 ******** 2026-04-09 05:46:16.519081 | orchestrator | ok: 
[testbed-node-3] 2026-04-09 05:46:16.519097 | orchestrator | 2026-04-09 05:46:16.519109 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 05:46:16.519121 | orchestrator | Thursday 09 April 2026 05:46:11 +0000 (0:00:01.154) 0:35:13.365 ******** 2026-04-09 05:46:16.519133 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:16.519145 | orchestrator | 2026-04-09 05:46:16.519156 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 05:46:16.519167 | orchestrator | Thursday 09 April 2026 05:46:12 +0000 (0:00:01.152) 0:35:14.517 ******** 2026-04-09 05:46:16.519179 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:16.519191 | orchestrator | 2026-04-09 05:46:16.519202 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 05:46:16.519213 | orchestrator | Thursday 09 April 2026 05:46:13 +0000 (0:00:01.287) 0:35:15.805 ******** 2026-04-09 05:46:16.519224 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:16.519235 | orchestrator | 2026-04-09 05:46:16.519246 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 05:46:16.519258 | orchestrator | Thursday 09 April 2026 05:46:15 +0000 (0:00:01.195) 0:35:17.000 ******** 2026-04-09 05:46:16.519268 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:16.519280 | orchestrator | 2026-04-09 05:46:16.519290 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 05:46:16.519301 | orchestrator | Thursday 09 April 2026 05:46:16 +0000 (0:00:01.175) 0:35:18.176 ******** 2026-04-09 05:46:16.519315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': 
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:46:16.519332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'uuids': ['9adc5058-59dc-41de-adf6-afc54c646e02'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ']}})  2026-04-09 05:46:16.519362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5d5b0f3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:46:16.519375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5']}})  2026-04-09 05:46:16.519407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:46:16.519438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:46:16.519451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-11-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:46:16.519463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:46:16.519474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw', 'dm-uuid-CRYPT-LUKS2-34a00b1693eb41a48240b70c6fb1290d-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:46:16.519486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:46:16.519503 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'uuids': ['34a00b16-93eb-41a4-8240-b70c6fb1290d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw']}})  2026-04-09 05:46:16.519525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141']}})  2026-04-09 05:46:16.519549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:46:17.886768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bd1f840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 05:46:17.886883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:46:17.886921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:46:17.886934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ', 'dm-uuid-CRYPT-LUKS2-9adc505859dc41deadf6afc54c646e02-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:46:17.886946 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:17.886958 | orchestrator | 2026-04-09 05:46:17.886969 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 05:46:17.886980 | orchestrator | Thursday 09 April 2026 05:46:17 +0000 (0:00:01.396) 0:35:19.573 ******** 2026-04-09 05:46:17.887008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:17.887022 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'uuids': ['9adc5058-59dc-41de-adf6-afc54c646e02'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:17.887033 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5d5b0f3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:17.887050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:17.887069 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:17.887087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002002 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-11-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002159 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002176 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw', 'dm-uuid-CRYPT-LUKS2-34a00b1693eb41a48240b70c6fb1290d-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002239 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'uuids': ['34a00b16-93eb-41a4-8240-b70c6fb1290d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002290 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002310 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bd1f840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:18.002351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:57.180395 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ', 'dm-uuid-CRYPT-LUKS2-9adc505859dc41deadf6afc54c646e02-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:46:57.180513 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.180531 | orchestrator | 2026-04-09 05:46:57.180544 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 05:46:57.180557 | orchestrator | Thursday 09 April 2026 05:46:19 +0000 (0:00:01.440) 0:35:21.013 ******** 2026-04-09 05:46:57.180568 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:57.180605 | orchestrator | 2026-04-09 05:46:57.180617 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 05:46:57.180628 | orchestrator | Thursday 09 April 2026 05:46:20 +0000 (0:00:01.498) 0:35:22.511 ******** 2026-04-09 05:46:57.180639 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:57.180650 | orchestrator | 2026-04-09 05:46:57.180661 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:46:57.180672 | orchestrator | Thursday 09 April 2026 05:46:21 +0000 (0:00:01.122) 0:35:23.634 ******** 2026-04-09 05:46:57.180682 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:57.180693 | orchestrator | 2026-04-09 05:46:57.180704 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:46:57.180772 | orchestrator | Thursday 09 April 2026 05:46:23 +0000 (0:00:01.449) 0:35:25.083 ******** 2026-04-09 05:46:57.180785 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.180796 | orchestrator | 2026-04-09 05:46:57.180807 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:46:57.180832 | orchestrator | Thursday 09 April 2026 05:46:24 +0000 (0:00:01.179) 0:35:26.262 ******** 2026-04-09 05:46:57.180843 | orchestrator | skipping: [testbed-node-3] 2026-04-09 
05:46:57.180854 | orchestrator | 2026-04-09 05:46:57.180865 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:46:57.180876 | orchestrator | Thursday 09 April 2026 05:46:25 +0000 (0:00:01.224) 0:35:27.487 ******** 2026-04-09 05:46:57.180887 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.180897 | orchestrator | 2026-04-09 05:46:57.180908 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 05:46:57.180919 | orchestrator | Thursday 09 April 2026 05:46:26 +0000 (0:00:01.138) 0:35:28.626 ******** 2026-04-09 05:46:57.180930 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-09 05:46:57.180941 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-09 05:46:57.180952 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-09 05:46:57.180963 | orchestrator | 2026-04-09 05:46:57.180974 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 05:46:57.180984 | orchestrator | Thursday 09 April 2026 05:46:28 +0000 (0:00:02.030) 0:35:30.657 ******** 2026-04-09 05:46:57.180996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-09 05:46:57.181007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-09 05:46:57.181018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-09 05:46:57.181029 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.181039 | orchestrator | 2026-04-09 05:46:57.181050 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 05:46:57.181061 | orchestrator | Thursday 09 April 2026 05:46:29 +0000 (0:00:01.195) 0:35:31.852 ******** 2026-04-09 05:46:57.181072 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-04-09 05:46:57.181084 | 
orchestrator | 2026-04-09 05:46:57.181096 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 05:46:57.181108 | orchestrator | Thursday 09 April 2026 05:46:31 +0000 (0:00:01.293) 0:35:33.146 ******** 2026-04-09 05:46:57.181118 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.181129 | orchestrator | 2026-04-09 05:46:57.181140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 05:46:57.181151 | orchestrator | Thursday 09 April 2026 05:46:32 +0000 (0:00:01.126) 0:35:34.273 ******** 2026-04-09 05:46:57.181162 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.181173 | orchestrator | 2026-04-09 05:46:57.181183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 05:46:57.181194 | orchestrator | Thursday 09 April 2026 05:46:33 +0000 (0:00:01.525) 0:35:35.799 ******** 2026-04-09 05:46:57.181205 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.181216 | orchestrator | 2026-04-09 05:46:57.181235 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 05:46:57.181246 | orchestrator | Thursday 09 April 2026 05:46:35 +0000 (0:00:01.127) 0:35:36.926 ******** 2026-04-09 05:46:57.181262 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:57.181280 | orchestrator | 2026-04-09 05:46:57.181292 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 05:46:57.181303 | orchestrator | Thursday 09 April 2026 05:46:36 +0000 (0:00:01.256) 0:35:38.183 ******** 2026-04-09 05:46:57.181313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 05:46:57.181341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 05:46:57.181353 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-09 05:46:57.181364 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.181375 | orchestrator | 2026-04-09 05:46:57.181385 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 05:46:57.181396 | orchestrator | Thursday 09 April 2026 05:46:37 +0000 (0:00:01.408) 0:35:39.592 ******** 2026-04-09 05:46:57.181407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 05:46:57.181418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 05:46:57.181428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 05:46:57.181439 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.181450 | orchestrator | 2026-04-09 05:46:57.181461 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 05:46:57.181472 | orchestrator | Thursday 09 April 2026 05:46:39 +0000 (0:00:01.395) 0:35:40.987 ******** 2026-04-09 05:46:57.181482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 05:46:57.181493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 05:46:57.181504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 05:46:57.181514 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:46:57.181525 | orchestrator | 2026-04-09 05:46:57.181536 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 05:46:57.181547 | orchestrator | Thursday 09 April 2026 05:46:40 +0000 (0:00:01.385) 0:35:42.373 ******** 2026-04-09 05:46:57.181558 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:46:57.181569 | orchestrator | 2026-04-09 05:46:57.181580 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 05:46:57.181590 | orchestrator | Thursday 09 April 2026 05:46:41 +0000 
(0:00:01.193) 0:35:43.566 ******** 2026-04-09 05:46:57.181601 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 05:46:57.181612 | orchestrator | 2026-04-09 05:46:57.181623 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 05:46:57.181634 | orchestrator | Thursday 09 April 2026 05:46:43 +0000 (0:00:01.450) 0:35:45.017 ******** 2026-04-09 05:46:57.181645 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:46:57.181656 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:46:57.181672 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:46:57.181683 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 05:46:57.181694 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 05:46:57.181704 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 05:46:57.181735 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:46:57.181747 | orchestrator | 2026-04-09 05:46:57.181758 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 05:46:57.181768 | orchestrator | Thursday 09 April 2026 05:46:45 +0000 (0:00:02.273) 0:35:47.290 ******** 2026-04-09 05:46:57.181780 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:46:57.181798 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:46:57.181809 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:46:57.181819 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 05:46:57.181830 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 05:46:57.181841 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 05:46:57.181852 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 05:46:57.181864 | orchestrator |
2026-04-09 05:46:57.181875 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-04-09 05:46:57.181885 | orchestrator | Thursday 09 April 2026 05:46:48 +0000 (0:00:02.730) 0:35:50.021 ********
2026-04-09 05:46:57.181896 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:46:57.181907 | orchestrator |
2026-04-09 05:46:57.181918 | orchestrator | TASK [Set num_osds] ************************************************************
2026-04-09 05:46:57.181929 | orchestrator | Thursday 09 April 2026 05:46:49 +0000 (0:00:01.474) 0:35:51.495 ********
2026-04-09 05:46:57.181940 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:46:57.181951 | orchestrator |
2026-04-09 05:46:57.181962 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-04-09 05:46:57.181973 | orchestrator | Thursday 09 April 2026 05:46:50 +0000 (0:00:01.122) 0:35:52.617 ********
2026-04-09 05:46:57.181984 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:46:57.181995 | orchestrator |
2026-04-09 05:46:57.182006 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-04-09 05:46:57.182076 | orchestrator | Thursday 09 April 2026 05:46:52 +0000 (0:00:01.260) 0:35:53.878 ********
2026-04-09 05:46:57.182091 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-04-09 05:46:57.182103 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-04-09 05:46:57.182114 | orchestrator |
2026-04-09 05:46:57.182125 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 05:46:57.182136 | orchestrator | Thursday 09 April 2026 05:46:56 +0000 (0:00:04.031) 0:35:57.910 ********
2026-04-09 05:46:57.182147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-04-09 05:46:57.182159 | orchestrator |
2026-04-09 05:46:57.182170 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 05:46:57.182189 | orchestrator | Thursday 09 April 2026 05:46:57 +0000 (0:00:01.130) 0:35:59.040 ********
2026-04-09 05:47:48.055456 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-04-09 05:47:48.055572 | orchestrator |
2026-04-09 05:47:48.055612 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 05:47:48.055626 | orchestrator | Thursday 09 April 2026 05:46:58 +0000 (0:00:01.121) 0:36:00.162 ********
2026-04-09 05:47:48.055638 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.055650 | orchestrator |
2026-04-09 05:47:48.055661 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 05:47:48.055673 | orchestrator | Thursday 09 April 2026 05:46:59 +0000 (0:00:01.170) 0:36:01.332 ********
2026-04-09 05:47:48.055684 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.055696 | orchestrator |
2026-04-09 05:47:48.055758 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 05:47:48.055771 | orchestrator | Thursday 09 April 2026 05:47:00 +0000 (0:00:01.520) 0:36:02.853 ********
2026-04-09 05:47:48.055782 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.055810 | orchestrator |
2026-04-09 05:47:48.055845 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 05:47:48.055856 | orchestrator | Thursday 09 April 2026 05:47:02 +0000 (0:00:01.534) 0:36:04.387 ********
2026-04-09 05:47:48.055867 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.055878 | orchestrator |
2026-04-09 05:47:48.055913 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 05:47:48.055925 | orchestrator | Thursday 09 April 2026 05:47:04 +0000 (0:00:01.581) 0:36:05.968 ********
2026-04-09 05:47:48.055936 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.055947 | orchestrator |
2026-04-09 05:47:48.055958 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 05:47:48.055969 | orchestrator | Thursday 09 April 2026 05:47:05 +0000 (0:00:01.189) 0:36:07.158 ********
2026-04-09 05:47:48.055980 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.055991 | orchestrator |
2026-04-09 05:47:48.056005 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 05:47:48.056018 | orchestrator | Thursday 09 April 2026 05:47:06 +0000 (0:00:01.117) 0:36:08.276 ********
2026-04-09 05:47:48.056031 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056044 | orchestrator |
2026-04-09 05:47:48.056057 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 05:47:48.056070 | orchestrator | Thursday 09 April 2026 05:47:07 +0000 (0:00:01.133) 0:36:09.410 ********
2026-04-09 05:47:48.056098 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.056111 | orchestrator |
2026-04-09 05:47:48.056124 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 05:47:48.056138 | orchestrator | Thursday 09 April 2026 05:47:09 +0000 (0:00:01.499) 0:36:10.909 ********
2026-04-09 05:47:48.056151 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.056164 | orchestrator |
2026-04-09 05:47:48.056176 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 05:47:48.056190 | orchestrator | Thursday 09 April 2026 05:47:10 +0000 (0:00:01.503) 0:36:12.412 ********
2026-04-09 05:47:48.056204 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056216 | orchestrator |
2026-04-09 05:47:48.056228 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 05:47:48.056242 | orchestrator | Thursday 09 April 2026 05:47:11 +0000 (0:00:01.157) 0:36:13.570 ********
2026-04-09 05:47:48.056254 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056267 | orchestrator |
2026-04-09 05:47:48.056280 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 05:47:48.056293 | orchestrator | Thursday 09 April 2026 05:47:12 +0000 (0:00:01.219) 0:36:14.790 ********
2026-04-09 05:47:48.056306 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.056319 | orchestrator |
2026-04-09 05:47:48.056332 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 05:47:48.056346 | orchestrator | Thursday 09 April 2026 05:47:14 +0000 (0:00:01.164) 0:36:15.954 ********
2026-04-09 05:47:48.056359 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.056372 | orchestrator |
2026-04-09 05:47:48.056385 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 05:47:48.056395 | orchestrator | Thursday 09 April 2026 05:47:15 +0000 (0:00:01.159) 0:36:17.114 ********
2026-04-09 05:47:48.056406 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.056417 | orchestrator |
2026-04-09 05:47:48.056428 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 05:47:48.056439 | orchestrator | Thursday 09 April 2026 05:47:16 +0000 (0:00:01.146) 0:36:18.261 ********
2026-04-09 05:47:48.056450 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056475 | orchestrator |
2026-04-09 05:47:48.056486 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 05:47:48.056508 | orchestrator | Thursday 09 April 2026 05:47:17 +0000 (0:00:01.100) 0:36:19.361 ********
2026-04-09 05:47:48.056519 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056530 | orchestrator |
2026-04-09 05:47:48.056541 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 05:47:48.056552 | orchestrator | Thursday 09 April 2026 05:47:18 +0000 (0:00:01.118) 0:36:20.480 ********
2026-04-09 05:47:48.056563 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056586 | orchestrator |
2026-04-09 05:47:48.056606 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 05:47:48.056618 | orchestrator | Thursday 09 April 2026 05:47:19 +0000 (0:00:01.114) 0:36:21.595 ********
2026-04-09 05:47:48.056629 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.056640 | orchestrator |
2026-04-09 05:47:48.056651 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 05:47:48.056662 | orchestrator | Thursday 09 April 2026 05:47:20 +0000 (0:00:01.151) 0:36:22.746 ********
2026-04-09 05:47:48.056673 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.056684 | orchestrator |
2026-04-09 05:47:48.056695 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-09 05:47:48.056726 | orchestrator | Thursday 09 April 2026 05:47:22 +0000 (0:00:01.285) 0:36:24.031 ********
2026-04-09 05:47:48.056738 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056749 | orchestrator |
2026-04-09 05:47:48.056778 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-09 05:47:48.056803 | orchestrator | Thursday 09 April 2026 05:47:23 +0000 (0:00:01.122) 0:36:25.154 ********
2026-04-09 05:47:48.056815 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056826 | orchestrator |
2026-04-09 05:47:48.056837 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-09 05:47:48.056848 | orchestrator | Thursday 09 April 2026 05:47:24 +0000 (0:00:01.155) 0:36:26.310 ********
2026-04-09 05:47:48.056859 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056870 | orchestrator |
2026-04-09 05:47:48.056881 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-09 05:47:48.056892 | orchestrator | Thursday 09 April 2026 05:47:25 +0000 (0:00:01.134) 0:36:27.445 ********
2026-04-09 05:47:48.056903 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056914 | orchestrator |
2026-04-09 05:47:48.056925 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-09 05:47:48.056936 | orchestrator | Thursday 09 April 2026 05:47:26 +0000 (0:00:01.179) 0:36:28.624 ********
2026-04-09 05:47:48.056947 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.056958 | orchestrator |
2026-04-09 05:47:48.056969 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-09 05:47:48.056980 | orchestrator | Thursday 09 April 2026 05:47:27 +0000 (0:00:01.126) 0:36:29.751 ********
2026-04-09 05:47:48.056991 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057002 | orchestrator |
2026-04-09 05:47:48.057013 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-09 05:47:48.057024 | orchestrator | Thursday 09 April 2026 05:47:29 +0000 (0:00:01.129) 0:36:30.881 ********
2026-04-09 05:47:48.057035 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057046 | orchestrator |
2026-04-09 05:47:48.057057 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-09 05:47:48.057069 | orchestrator | Thursday 09 April 2026 05:47:30 +0000 (0:00:01.126) 0:36:32.008 ********
2026-04-09 05:47:48.057080 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057091 | orchestrator |
2026-04-09 05:47:48.057102 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-09 05:47:48.057113 | orchestrator | Thursday 09 April 2026 05:47:31 +0000 (0:00:01.161) 0:36:33.170 ********
2026-04-09 05:47:48.057124 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057135 | orchestrator |
2026-04-09 05:47:48.057146 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-09 05:47:48.057163 | orchestrator | Thursday 09 April 2026 05:47:32 +0000 (0:00:01.186) 0:36:34.356 ********
2026-04-09 05:47:48.057175 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057186 | orchestrator |
2026-04-09 05:47:48.057196 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-09 05:47:48.057207 | orchestrator | Thursday 09 April 2026 05:47:33 +0000 (0:00:01.114) 0:36:35.471 ********
2026-04-09 05:47:48.057218 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057237 | orchestrator |
2026-04-09 05:47:48.057248 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-09 05:47:48.057259 | orchestrator | Thursday 09 April 2026 05:47:34 +0000 (0:00:01.133) 0:36:36.605 ********
2026-04-09 05:47:48.057270 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057281 | orchestrator |
2026-04-09 05:47:48.057292 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-09 05:47:48.057303 | orchestrator | Thursday 09 April 2026 05:47:36 +0000 (0:00:01.266) 0:36:37.872 ********
2026-04-09 05:47:48.057314 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.057325 | orchestrator |
2026-04-09 05:47:48.057336 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-09 05:47:48.057347 | orchestrator | Thursday 09 April 2026 05:47:37 +0000 (0:00:01.928) 0:36:39.801 ********
2026-04-09 05:47:48.057358 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.057369 | orchestrator |
2026-04-09 05:47:48.057380 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-09 05:47:48.057391 | orchestrator | Thursday 09 April 2026 05:47:40 +0000 (0:00:02.232) 0:36:42.033 ********
2026-04-09 05:47:48.057402 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-04-09 05:47:48.057413 | orchestrator |
2026-04-09 05:47:48.057423 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-09 05:47:48.057434 | orchestrator | Thursday 09 April 2026 05:47:41 +0000 (0:00:01.157) 0:36:43.191 ********
2026-04-09 05:47:48.057445 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057456 | orchestrator |
2026-04-09 05:47:48.057467 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-09 05:47:48.057478 | orchestrator | Thursday 09 April 2026 05:47:42 +0000 (0:00:01.119) 0:36:44.311 ********
2026-04-09 05:47:48.057489 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057500 | orchestrator |
2026-04-09 05:47:48.057511 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-09 05:47:48.057522 | orchestrator | Thursday 09 April 2026 05:47:43 +0000 (0:00:01.170) 0:36:45.481 ********
2026-04-09 05:47:48.057533 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 05:47:48.057544 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 05:47:48.057555 | orchestrator |
2026-04-09 05:47:48.057566 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-09 05:47:48.057577 | orchestrator | Thursday 09 April 2026 05:47:45 +0000 (0:00:01.851) 0:36:47.333 ********
2026-04-09 05:47:48.057588 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:47:48.057599 | orchestrator |
2026-04-09 05:47:48.057610 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-09 05:47:48.057620 | orchestrator | Thursday 09 April 2026 05:47:46 +0000 (0:00:01.450) 0:36:48.784 ********
2026-04-09 05:47:48.057631 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:47:48.057642 | orchestrator |
2026-04-09 05:47:48.057654 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-09 05:47:48.057671 | orchestrator | Thursday 09 April 2026 05:47:48 +0000 (0:00:01.130) 0:36:49.914 ********
2026-04-09 05:48:34.055256 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055341 | orchestrator |
2026-04-09 05:48:34.055348 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-09 05:48:34.055353 | orchestrator | Thursday 09 April 2026 05:47:49 +0000 (0:00:01.141) 0:36:51.055 ********
2026-04-09 05:48:34.055358 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055362 | orchestrator |
2026-04-09 05:48:34.055367 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-09 05:48:34.055373 | orchestrator | Thursday 09 April 2026 05:47:50 +0000 (0:00:01.158) 0:36:52.214 ********
2026-04-09 05:48:34.055380 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-04-09 05:48:34.055386 | orchestrator |
2026-04-09 05:48:34.055414 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-09 05:48:34.055421 | orchestrator | Thursday 09 April 2026 05:47:51 +0000 (0:00:01.255) 0:36:53.469 ********
2026-04-09 05:48:34.055427 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:48:34.055435 | orchestrator |
2026-04-09 05:48:34.055442 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-09 05:48:34.055450 | orchestrator | Thursday 09 April 2026 05:47:53 +0000 (0:00:01.812) 0:36:55.281 ********
2026-04-09 05:48:34.055456 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 05:48:34.055462 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 05:48:34.055466 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 05:48:34.055470 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055474 | orchestrator |
2026-04-09 05:48:34.055478 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-09 05:48:34.055482 | orchestrator | Thursday 09 April 2026 05:47:54 +0000 (0:00:01.198) 0:36:56.480 ********
2026-04-09 05:48:34.055485 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055489 | orchestrator |
2026-04-09 05:48:34.055493 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-09 05:48:34.055497 | orchestrator | Thursday 09 April 2026 05:47:55 +0000 (0:00:01.132) 0:36:57.612 ********
2026-04-09 05:48:34.055501 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055505 | orchestrator |
2026-04-09 05:48:34.055518 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-09 05:48:34.055522 | orchestrator | Thursday 09 April 2026 05:47:56 +0000 (0:00:01.153) 0:36:58.765 ********
2026-04-09 05:48:34.055526 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055530 | orchestrator |
2026-04-09 05:48:34.055534 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-09 05:48:34.055538 | orchestrator | Thursday 09 April 2026 05:47:58 +0000 (0:00:01.152) 0:36:59.918 ********
2026-04-09 05:48:34.055541 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055545 | orchestrator |
2026-04-09 05:48:34.055549 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-09 05:48:34.055553 | orchestrator | Thursday 09 April 2026 05:47:59 +0000 (0:00:01.153) 0:37:01.071 ********
2026-04-09 05:48:34.055557 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055560 | orchestrator |
2026-04-09 05:48:34.055564 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 05:48:34.055568 | orchestrator | Thursday 09 April 2026 05:48:00 +0000 (0:00:01.117) 0:37:02.188 ********
2026-04-09 05:48:34.055572 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:48:34.055576 | orchestrator |
2026-04-09 05:48:34.055580 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 05:48:34.055584 | orchestrator | Thursday 09 April 2026 05:48:02 +0000 (0:00:02.432) 0:37:04.621 ********
2026-04-09 05:48:34.055588 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:48:34.055591 | orchestrator |
2026-04-09 05:48:34.055595 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 05:48:34.055599 | orchestrator | Thursday 09 April 2026 05:48:03 +0000 (0:00:01.151) 0:37:05.773 ********
2026-04-09 05:48:34.055603 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-04-09 05:48:34.055607 | orchestrator |
2026-04-09 05:48:34.055611 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 05:48:34.055614 | orchestrator | Thursday 09 April 2026 05:48:05 +0000 (0:00:01.139) 0:37:06.913 ********
2026-04-09 05:48:34.055618 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055622 | orchestrator |
2026-04-09 05:48:34.055626 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 05:48:34.055630 | orchestrator | Thursday 09 April 2026 05:48:06 +0000 (0:00:01.188) 0:37:08.102 ********
2026-04-09 05:48:34.055640 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055644 | orchestrator |
2026-04-09 05:48:34.055648 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 05:48:34.055652 | orchestrator | Thursday 09 April 2026 05:48:07 +0000 (0:00:01.139) 0:37:09.242 ********
2026-04-09 05:48:34.055655 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055659 | orchestrator |
2026-04-09 05:48:34.055663 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 05:48:34.055667 | orchestrator | Thursday 09 April 2026 05:48:08 +0000 (0:00:01.125) 0:37:10.368 ********
2026-04-09 05:48:34.055671 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055674 | orchestrator |
2026-04-09 05:48:34.055678 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 05:48:34.055682 | orchestrator | Thursday 09 April 2026 05:48:09 +0000 (0:00:01.157) 0:37:11.525 ********
2026-04-09 05:48:34.055686 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055690 | orchestrator |
2026-04-09 05:48:34.055693 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 05:48:34.055697 | orchestrator | Thursday 09 April 2026 05:48:10 +0000 (0:00:01.155) 0:37:12.681 ********
2026-04-09 05:48:34.055701 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055734 | orchestrator |
2026-04-09 05:48:34.055748 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 05:48:34.055753 | orchestrator | Thursday 09 April 2026 05:48:11 +0000 (0:00:01.130) 0:37:13.811 ********
2026-04-09 05:48:34.055757 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055760 | orchestrator |
2026-04-09 05:48:34.055764 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 05:48:34.055768 | orchestrator | Thursday 09 April 2026 05:48:13 +0000 (0:00:01.150) 0:37:14.962 ********
2026-04-09 05:48:34.055772 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055776 | orchestrator |
2026-04-09 05:48:34.055780 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 05:48:34.055784 | orchestrator | Thursday 09 April 2026 05:48:14 +0000 (0:00:01.202) 0:37:16.165 ********
2026-04-09 05:48:34.055788 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:48:34.055791 | orchestrator |
2026-04-09 05:48:34.055795 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 05:48:34.055799 | orchestrator | Thursday 09 April 2026 05:48:15 +0000 (0:00:01.209) 0:37:17.375 ********
2026-04-09 05:48:34.055803 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-04-09 05:48:34.055807 | orchestrator |
2026-04-09 05:48:34.055811 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 05:48:34.055815 | orchestrator | Thursday 09 April 2026 05:48:16 +0000 (0:00:01.141) 0:37:18.516 ********
2026-04-09 05:48:34.055819 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-04-09 05:48:34.055824 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-09 05:48:34.055829 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-09 05:48:34.055834 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-09 05:48:34.055838 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-09 05:48:34.055843 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-09 05:48:34.055847 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-09 05:48:34.055852 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-09 05:48:34.055857 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 05:48:34.055861 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 05:48:34.055866 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 05:48:34.055874 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 05:48:34.055879 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 05:48:34.055888 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 05:48:34.055892 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-04-09 05:48:34.055897 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-04-09 05:48:34.055901 | orchestrator |
2026-04-09 05:48:34.055906 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 05:48:34.055910 | orchestrator | Thursday 09 April 2026 05:48:23 +0000 (0:00:06.539) 0:37:25.056 ********
2026-04-09 05:48:34.055915 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-04-09 05:48:34.055920 | orchestrator |
2026-04-09 05:48:34.055924 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-09 05:48:34.055929 | orchestrator | Thursday 09 April 2026 05:48:24 +0000 (0:00:01.623) 0:37:26.679 ********
2026-04-09 05:48:34.055936 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 05:48:34.055944 | orchestrator |
2026-04-09 05:48:34.055950 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-09 05:48:34.055957 | orchestrator | Thursday 09 April 2026 05:48:26 +0000 (0:00:01.522) 0:37:28.202 ********
2026-04-09 05:48:34.055964 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 05:48:34.055970 | orchestrator |
2026-04-09 05:48:34.055976 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 05:48:34.055983 | orchestrator | Thursday 09 April 2026 05:48:28 +0000 (0:00:01.991) 0:37:30.193 ********
2026-04-09 05:48:34.055989 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.055996 | orchestrator |
2026-04-09 05:48:34.056002 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 05:48:34.056010 | orchestrator | Thursday 09 April 2026 05:48:29 +0000 (0:00:01.121) 0:37:31.315 ********
2026-04-09 05:48:34.056017 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.056023 | orchestrator |
2026-04-09 05:48:34.056030 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 05:48:34.056036 | orchestrator | Thursday 09 April 2026 05:48:30 +0000 (0:00:01.113) 0:37:32.428 ********
2026-04-09 05:48:34.056040 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.056045 | orchestrator |
2026-04-09 05:48:34.056050 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 05:48:34.056054 | orchestrator | Thursday 09 April 2026 05:48:31 +0000 (0:00:01.122) 0:37:33.551 ********
2026-04-09 05:48:34.056059 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.056063 | orchestrator |
2026-04-09 05:48:34.056068 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 05:48:34.056072 | orchestrator | Thursday 09 April 2026 05:48:32 +0000 (0:00:01.118) 0:37:34.670 ********
2026-04-09 05:48:34.056077 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.056081 | orchestrator |
2026-04-09 05:48:34.056086 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 05:48:34.056091 | orchestrator | Thursday 09 April 2026 05:48:33 +0000 (0:00:01.099) 0:37:35.770 ********
2026-04-09 05:48:34.056095 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:48:34.056100 | orchestrator |
2026-04-09 05:48:34.056108 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 05:49:24.502613 | orchestrator | Thursday 09 April 2026 05:48:35 +0000 (0:00:01.148) 0:37:36.918 ********
2026-04-09 05:49:24.502794 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.502824 | orchestrator |
2026-04-09 05:49:24.502844 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 05:49:24.502865 | orchestrator | Thursday 09 April 2026 05:48:36 +0000 (0:00:01.138) 0:37:38.057 ********
2026-04-09 05:49:24.502880 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.502915 | orchestrator |
2026-04-09 05:49:24.502928 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 05:49:24.502939 | orchestrator | Thursday 09 April 2026 05:48:37 +0000 (0:00:01.101) 0:37:39.159 ********
2026-04-09 05:49:24.502950 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.502961 | orchestrator |
2026-04-09 05:49:24.502972 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 05:49:24.502983 | orchestrator | Thursday 09 April 2026 05:48:38 +0000 (0:00:01.104) 0:37:40.263 ********
2026-04-09 05:49:24.502994 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.503005 | orchestrator |
2026-04-09 05:49:24.503016 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 05:49:24.503027 | orchestrator | Thursday 09 April 2026 05:48:39 +0000 (0:00:01.134) 0:37:41.398 ********
2026-04-09 05:49:24.503038 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:49:24.503049 | orchestrator |
2026-04-09 05:49:24.503060 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 05:49:24.503071 | orchestrator | Thursday 09 April 2026 05:48:40 +0000 (0:00:01.238) 0:37:42.637 ********
2026-04-09 05:49:24.503082 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-09 05:49:24.503093 | orchestrator |
2026-04-09 05:49:24.503104 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 05:49:24.503115 | orchestrator | Thursday 09 April 2026 05:48:45 +0000 (0:00:04.495) 0:37:47.132 ********
2026-04-09 05:49:24.503126 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 05:49:24.503138 | orchestrator |
2026-04-09 05:49:24.503164 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 05:49:24.503178 | orchestrator | Thursday 09 April 2026 05:48:46 +0000 (0:00:01.197) 0:37:48.330 ********
2026-04-09 05:49:24.503193 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-09 05:49:24.503209 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-09 05:49:24.503224 | orchestrator |
2026-04-09 05:49:24.503238 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-09 05:49:24.503250 | orchestrator | Thursday 09 April 2026 05:48:53 +0000 (0:00:07.529) 0:37:55.859 ********
2026-04-09 05:49:24.503263 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.503276 | orchestrator |
2026-04-09 05:49:24.503289 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-09 05:49:24.503302 | orchestrator | Thursday 09 April 2026 05:48:55 +0000 (0:00:01.133) 0:37:56.993 ********
2026-04-09 05:49:24.503315 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.503327 | orchestrator |
2026-04-09 05:49:24.503339 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:49:24.503353 | orchestrator | Thursday 09 April 2026 05:48:56 +0000 (0:00:01.162) 0:37:58.156 ********
2026-04-09 05:49:24.503366 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.503378 | orchestrator |
2026-04-09 05:49:24.503392 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:49:24.503405 | orchestrator | Thursday 09 April 2026 05:48:57 +0000 (0:00:01.161) 0:37:59.317 ********
2026-04-09 05:49:24.503418 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.503430 | orchestrator |
2026-04-09 05:49:24.503450 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:49:24.503463 | orchestrator | Thursday 09 April 2026 05:48:58 +0000 (0:00:01.155) 0:38:00.472 ********
2026-04-09 05:49:24.503476 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.503488 | orchestrator |
2026-04-09 05:49:24.503500 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:49:24.503513 | orchestrator | Thursday 09 April 2026 05:48:59 +0000 (0:00:01.137) 0:38:01.610 ********
2026-04-09 05:49:24.503526 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:49:24.503538 | orchestrator |
2026-04-09 05:49:24.503551 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:49:24.503562 | orchestrator | Thursday 09 April 2026 05:49:01 +0000 (0:00:01.287) 0:38:02.898 ********
2026-04-09 05:49:24.503573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 05:49:24.503584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 05:49:24.503595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 05:49:24.503606 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.503617 | orchestrator |
2026-04-09 05:49:24.503628 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 05:49:24.503658 | orchestrator | Thursday 09 April 2026 05:49:02 +0000 (0:00:01.401) 0:38:04.299 ********
2026-04-09 05:49:24.503670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 05:49:24.503681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 05:49:24.503692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 05:49:24.503726 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.503737 | orchestrator |
2026-04-09 05:49:24.503748 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 05:49:24.503759 | orchestrator | Thursday 09 April 2026 05:49:04 +0000 (0:00:01.764) 0:38:06.064 ********
2026-04-09 05:49:24.503770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 05:49:24.503781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 05:49:24.503792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 05:49:24.503803 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.503814 | orchestrator |
2026-04-09 05:49:24.503825 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 05:49:24.503836 | orchestrator | Thursday 09 April 2026 05:49:06 +0000 (0:00:01.884) 0:38:07.949 ********
2026-04-09 05:49:24.503847 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:49:24.503858 | orchestrator |
2026-04-09 05:49:24.503869 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 05:49:24.503879 | orchestrator | Thursday 09 April 2026 05:49:07 +0000 (0:00:01.231) 0:38:09.180 ********
2026-04-09 05:49:24.503890 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 05:49:24.503901 | orchestrator |
2026-04-09 05:49:24.503912 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 05:49:24.503922 | orchestrator | Thursday 09 April 2026 05:49:08 +0000 (0:00:01.357) 0:38:10.538 ********
2026-04-09 05:49:24.503933 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:49:24.503944 | orchestrator |
2026-04-09 05:49:24.503955 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-09 05:49:24.503965 | orchestrator | Thursday 09 April 2026 05:49:10 +0000 (0:00:01.764) 0:38:12.302 ********
2026-04-09 05:49:24.503976 | orchestrator | ok: [testbed-node-3]
2026-04-09 05:49:24.503987 | orchestrator |
2026-04-09 05:49:24.503998 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-09 05:49:24.504014 | orchestrator | Thursday 09 April 2026 05:49:11 +0000 (0:00:01.146) 0:38:13.448 ********
2026-04-09 05:49:24.504025 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:49:24.504036 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:49:24.504055 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:49:24.504066 | orchestrator |
2026-04-09 05:49:24.504077 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-09 05:49:24.504088 | orchestrator | Thursday 09 April 2026 05:49:13 +0000 (0:00:01.702) 0:38:15.151 ********
2026-04-09 05:49:24.504098 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-04-09 05:49:24.504109 | orchestrator |
2026-04-09 05:49:24.504120 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-09 05:49:24.504130 | orchestrator | Thursday 09 April 2026 05:49:14 +0000 (0:00:01.544) 0:38:16.695 ********
2026-04-09 05:49:24.504141 | orchestrator | skipping: [testbed-node-3]
2026-04-09 05:49:24.504152 | orchestrator |
2026-04-09 05:49:24.504163 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-09 05:49:24.504174 | orchestrator | Thursday 09 April 2026 05:49:15 +0000 (0:00:01.135) 0:38:17.831 ********
2026-04-09 05:49:24.504184 |
orchestrator | skipping: [testbed-node-3] 2026-04-09 05:49:24.504195 | orchestrator | 2026-04-09 05:49:24.504205 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-09 05:49:24.504216 | orchestrator | Thursday 09 April 2026 05:49:17 +0000 (0:00:01.115) 0:38:18.947 ******** 2026-04-09 05:49:24.504227 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:49:24.504238 | orchestrator | 2026-04-09 05:49:24.504248 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-09 05:49:24.504259 | orchestrator | Thursday 09 April 2026 05:49:18 +0000 (0:00:01.500) 0:38:20.447 ******** 2026-04-09 05:49:24.504270 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:49:24.504280 | orchestrator | 2026-04-09 05:49:24.504291 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-09 05:49:24.504302 | orchestrator | Thursday 09 April 2026 05:49:19 +0000 (0:00:01.141) 0:38:21.589 ******** 2026-04-09 05:49:24.504313 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-09 05:49:24.504324 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-09 05:49:24.504335 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-09 05:49:24.504345 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-09 05:49:24.504356 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-09 05:49:24.504367 | orchestrator | 2026-04-09 05:49:24.504378 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-09 05:49:24.504388 | orchestrator | Thursday 09 April 2026 05:49:23 +0000 (0:00:03.399) 0:38:24.988 ******** 2026-04-09 05:49:24.504399 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 05:49:24.504409 | orchestrator | 2026-04-09 05:49:24.504420 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-09 05:49:24.504431 | orchestrator | Thursday 09 April 2026 05:49:24 +0000 (0:00:01.121) 0:38:26.110 ******** 2026-04-09 05:49:24.504441 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-04-09 05:49:24.504452 | orchestrator | 2026-04-09 05:49:24.504463 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-09 05:50:31.652048 | orchestrator | Thursday 09 April 2026 05:49:25 +0000 (0:00:01.476) 0:38:27.587 ******** 2026-04-09 05:50:31.652170 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-09 05:50:31.652189 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-09 05:50:31.652203 | orchestrator | 2026-04-09 05:50:31.652215 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-09 05:50:31.652226 | orchestrator | Thursday 09 April 2026 05:49:27 +0000 (0:00:01.808) 0:38:29.396 ******** 2026-04-09 05:50:31.652237 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 05:50:31.652272 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 05:50:31.652285 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 05:50:31.652296 | orchestrator | 2026-04-09 05:50:31.652307 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-09 05:50:31.652318 | orchestrator | Thursday 09 April 2026 05:49:30 +0000 (0:00:03.392) 0:38:32.789 ******** 2026-04-09 05:50:31.652329 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-09 05:50:31.652340 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 05:50:31.652352 | orchestrator | ok: [testbed-node-3] 
2026-04-09 05:50:31.652362 | orchestrator | 2026-04-09 05:50:31.652373 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-09 05:50:31.652384 | orchestrator | Thursday 09 April 2026 05:49:32 +0000 (0:00:01.932) 0:38:34.722 ******** 2026-04-09 05:50:31.652395 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:50:31.652406 | orchestrator | 2026-04-09 05:50:31.652416 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-09 05:50:31.652427 | orchestrator | Thursday 09 April 2026 05:49:34 +0000 (0:00:01.322) 0:38:36.044 ******** 2026-04-09 05:50:31.652438 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:50:31.652449 | orchestrator | 2026-04-09 05:50:31.652460 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-09 05:50:31.652470 | orchestrator | Thursday 09 April 2026 05:49:35 +0000 (0:00:01.196) 0:38:37.241 ******** 2026-04-09 05:50:31.652481 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:50:31.652492 | orchestrator | 2026-04-09 05:50:31.652503 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-09 05:50:31.652528 | orchestrator | Thursday 09 April 2026 05:49:36 +0000 (0:00:01.128) 0:38:38.369 ******** 2026-04-09 05:50:31.652539 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-04-09 05:50:31.652550 | orchestrator | 2026-04-09 05:50:31.652561 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-09 05:50:31.652571 | orchestrator | Thursday 09 April 2026 05:49:37 +0000 (0:00:01.463) 0:38:39.832 ******** 2026-04-09 05:50:31.652582 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:50:31.652593 | orchestrator | 2026-04-09 05:50:31.652604 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-04-09 05:50:31.652614 | orchestrator | Thursday 09 April 2026 05:49:39 +0000 (0:00:01.447) 0:38:41.280 ******** 2026-04-09 05:50:31.652625 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:50:31.652636 | orchestrator | 2026-04-09 05:50:31.652646 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-09 05:50:31.652657 | orchestrator | Thursday 09 April 2026 05:49:43 +0000 (0:00:03.966) 0:38:45.247 ******** 2026-04-09 05:50:31.652668 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-04-09 05:50:31.652679 | orchestrator | 2026-04-09 05:50:31.652690 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-09 05:50:31.652733 | orchestrator | Thursday 09 April 2026 05:49:44 +0000 (0:00:01.618) 0:38:46.865 ******** 2026-04-09 05:50:31.652753 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:50:31.652773 | orchestrator | 2026-04-09 05:50:31.652793 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-09 05:50:31.652811 | orchestrator | Thursday 09 April 2026 05:49:46 +0000 (0:00:01.937) 0:38:48.803 ******** 2026-04-09 05:50:31.652822 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:50:31.652833 | orchestrator | 2026-04-09 05:50:31.652843 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-09 05:50:31.652854 | orchestrator | Thursday 09 April 2026 05:49:48 +0000 (0:00:01.919) 0:38:50.722 ******** 2026-04-09 05:50:31.652865 | orchestrator | ok: [testbed-node-3] 2026-04-09 05:50:31.652876 | orchestrator | 2026-04-09 05:50:31.652886 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-09 05:50:31.652897 | orchestrator | Thursday 09 April 2026 05:49:51 +0000 (0:00:02.212) 0:38:52.934 ******** 2026-04-09 05:50:31.652917 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 05:50:31.652928 | orchestrator | 2026-04-09 05:50:31.652939 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-09 05:50:31.652949 | orchestrator | Thursday 09 April 2026 05:49:52 +0000 (0:00:01.202) 0:38:54.137 ******** 2026-04-09 05:50:31.652960 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:50:31.652971 | orchestrator | 2026-04-09 05:50:31.652981 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-09 05:50:31.652992 | orchestrator | Thursday 09 April 2026 05:49:53 +0000 (0:00:01.153) 0:38:55.290 ******** 2026-04-09 05:50:31.653003 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-09 05:50:31.653014 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-09 05:50:31.653025 | orchestrator | 2026-04-09 05:50:31.653035 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-09 05:50:31.653046 | orchestrator | Thursday 09 April 2026 05:49:55 +0000 (0:00:01.839) 0:38:57.129 ******** 2026-04-09 05:50:31.653057 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-09 05:50:31.653068 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-09 05:50:31.653078 | orchestrator | 2026-04-09 05:50:31.653089 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-09 05:50:31.653100 | orchestrator | Thursday 09 April 2026 05:49:58 +0000 (0:00:02.838) 0:38:59.967 ******** 2026-04-09 05:50:31.653111 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-09 05:50:31.653139 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-09 05:50:31.653151 | orchestrator | 2026-04-09 05:50:31.653162 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-09 05:50:31.653173 | orchestrator | Thursday 09 April 2026 05:50:02 +0000 (0:00:04.690) 0:39:04.658 ******** 2026-04-09 05:50:31.653183 | orchestrator 
| skipping: [testbed-node-3] 2026-04-09 05:50:31.653194 | orchestrator | 2026-04-09 05:50:31.653205 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-09 05:50:31.653216 | orchestrator | Thursday 09 April 2026 05:50:04 +0000 (0:00:01.265) 0:39:05.923 ******** 2026-04-09 05:50:31.653226 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:50:31.653237 | orchestrator | 2026-04-09 05:50:31.653248 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-09 05:50:31.653258 | orchestrator | Thursday 09 April 2026 05:50:05 +0000 (0:00:01.254) 0:39:07.178 ******** 2026-04-09 05:50:31.653269 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:50:31.653280 | orchestrator | 2026-04-09 05:50:31.653291 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-09 05:50:31.653301 | orchestrator | Thursday 09 April 2026 05:50:06 +0000 (0:00:01.553) 0:39:08.732 ******** 2026-04-09 05:50:31.653312 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:50:31.653323 | orchestrator | 2026-04-09 05:50:31.653333 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-09 05:50:31.653344 | orchestrator | Thursday 09 April 2026 05:50:07 +0000 (0:00:01.134) 0:39:09.867 ******** 2026-04-09 05:50:31.653355 | orchestrator | skipping: [testbed-node-3] 2026-04-09 05:50:31.653366 | orchestrator | 2026-04-09 05:50:31.653377 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-09 05:50:31.653388 | orchestrator | Thursday 09 April 2026 05:50:09 +0000 (0:00:01.127) 0:39:10.994 ******** 2026-04-09 05:50:31.653399 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-09 05:50:31.653411 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-09 05:50:31.653422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:50:31.653433 | orchestrator | 2026-04-09 05:50:31.653444 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-09 05:50:31.653454 | orchestrator | 2026-04-09 05:50:31.653471 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 05:50:31.653499 | orchestrator | Thursday 09 April 2026 05:50:17 +0000 (0:00:08.267) 0:39:19.261 ******** 2026-04-09 05:50:31.653518 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-09 05:50:31.653535 | orchestrator | 2026-04-09 05:50:31.653553 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-09 05:50:31.653570 | orchestrator | Thursday 09 April 2026 05:50:18 +0000 (0:00:01.248) 0:39:20.510 ******** 2026-04-09 05:50:31.653588 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:31.653606 | orchestrator | 2026-04-09 05:50:31.653624 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-09 05:50:31.653642 | orchestrator | Thursday 09 April 2026 05:50:20 +0000 (0:00:01.575) 0:39:22.086 ******** 2026-04-09 05:50:31.653659 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:31.653677 | orchestrator | 2026-04-09 05:50:31.653694 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 05:50:31.653742 | orchestrator | Thursday 09 April 2026 05:50:21 +0000 (0:00:01.139) 0:39:23.225 ******** 2026-04-09 05:50:31.653760 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:31.653777 | orchestrator | 2026-04-09 05:50:31.653795 | orchestrator | TASK [ceph-facts : Set_fact container_binary] 
********************************** 2026-04-09 05:50:31.653813 | orchestrator | Thursday 09 April 2026 05:50:22 +0000 (0:00:01.536) 0:39:24.762 ******** 2026-04-09 05:50:31.653832 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:31.653850 | orchestrator | 2026-04-09 05:50:31.653869 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-09 05:50:31.653884 | orchestrator | Thursday 09 April 2026 05:50:24 +0000 (0:00:01.161) 0:39:25.923 ******** 2026-04-09 05:50:31.653895 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:31.653906 | orchestrator | 2026-04-09 05:50:31.653917 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-09 05:50:31.653928 | orchestrator | Thursday 09 April 2026 05:50:25 +0000 (0:00:01.153) 0:39:27.077 ******** 2026-04-09 05:50:31.653939 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:31.653949 | orchestrator | 2026-04-09 05:50:31.653960 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-09 05:50:31.653971 | orchestrator | Thursday 09 April 2026 05:50:26 +0000 (0:00:01.194) 0:39:28.271 ******** 2026-04-09 05:50:31.653983 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:31.654002 | orchestrator | 2026-04-09 05:50:31.654101 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-09 05:50:31.654124 | orchestrator | Thursday 09 April 2026 05:50:27 +0000 (0:00:01.118) 0:39:29.390 ******** 2026-04-09 05:50:31.654141 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:31.654160 | orchestrator | 2026-04-09 05:50:31.654179 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-09 05:50:31.654197 | orchestrator | Thursday 09 April 2026 05:50:28 +0000 (0:00:01.151) 0:39:30.541 ******** 2026-04-09 05:50:31.654214 | orchestrator | ok: [testbed-node-4 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:50:31.654233 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:50:31.654249 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:50:31.654265 | orchestrator | 2026-04-09 05:50:31.654283 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-09 05:50:31.654301 | orchestrator | Thursday 09 April 2026 05:50:30 +0000 (0:00:01.673) 0:39:32.215 ******** 2026-04-09 05:50:31.654320 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:31.654337 | orchestrator | 2026-04-09 05:50:31.654357 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-09 05:50:31.654392 | orchestrator | Thursday 09 April 2026 05:50:31 +0000 (0:00:01.296) 0:39:33.512 ******** 2026-04-09 05:50:55.871141 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:50:55.871259 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:50:55.871299 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:50:55.871312 | orchestrator | 2026-04-09 05:50:55.871324 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 05:50:55.871335 | orchestrator | Thursday 09 April 2026 05:50:34 +0000 (0:00:02.966) 0:39:36.478 ******** 2026-04-09 05:50:55.871347 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-09 05:50:55.871359 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-09 05:50:55.871370 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-09 05:50:55.871381 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.871393 | orchestrator | 
2026-04-09 05:50:55.871404 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 05:50:55.871415 | orchestrator | Thursday 09 April 2026 05:50:36 +0000 (0:00:01.529) 0:39:38.007 ******** 2026-04-09 05:50:55.871428 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 05:50:55.871443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 05:50:55.871470 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 05:50:55.871481 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.871492 | orchestrator | 2026-04-09 05:50:55.871503 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 05:50:55.871514 | orchestrator | Thursday 09 April 2026 05:50:37 +0000 (0:00:01.657) 0:39:39.665 ******** 2026-04-09 05:50:55.871528 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:55.871542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:55.871554 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:55.871565 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.871576 | orchestrator | 2026-04-09 05:50:55.871587 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 05:50:55.871599 | orchestrator | Thursday 09 April 2026 05:50:38 +0000 (0:00:01.162) 0:39:40.828 ******** 2026-04-09 05:50:55.871613 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:50:32.211603', 'end': '2026-04-09 05:50:32.252676', 'delta': '0:00:00.041073', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 05:50:55.871652 | orchestrator | ok: 
[testbed-node-4] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:50:32.750812', 'end': '2026-04-09 05:50:32.795682', 'delta': '0:00:00.044870', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 05:50:55.871666 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:50:33.382244', 'end': '2026-04-09 05:50:33.441219', 'delta': '0:00:00.058975', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 05:50:55.871679 | orchestrator | 2026-04-09 05:50:55.871693 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 05:50:55.871737 | orchestrator | Thursday 09 April 2026 05:50:40 +0000 (0:00:01.199) 0:39:42.028 ******** 2026-04-09 05:50:55.871751 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:55.871764 | orchestrator | 2026-04-09 05:50:55.871777 | orchestrator | TASK [ceph-facts : Get 
current fsid if cluster is already running] ************* 2026-04-09 05:50:55.871791 | orchestrator | Thursday 09 April 2026 05:50:41 +0000 (0:00:01.253) 0:39:43.281 ******** 2026-04-09 05:50:55.871804 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.871817 | orchestrator | 2026-04-09 05:50:55.871830 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 05:50:55.871843 | orchestrator | Thursday 09 April 2026 05:50:42 +0000 (0:00:01.324) 0:39:44.606 ******** 2026-04-09 05:50:55.871855 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:55.871868 | orchestrator | 2026-04-09 05:50:55.871881 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 05:50:55.871894 | orchestrator | Thursday 09 April 2026 05:50:43 +0000 (0:00:01.175) 0:39:45.782 ******** 2026-04-09 05:50:55.871907 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:50:55.871919 | orchestrator | 2026-04-09 05:50:55.871933 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:50:55.871946 | orchestrator | Thursday 09 April 2026 05:50:46 +0000 (0:00:02.527) 0:39:48.309 ******** 2026-04-09 05:50:55.871959 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:55.871971 | orchestrator | 2026-04-09 05:50:55.871984 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 05:50:55.871997 | orchestrator | Thursday 09 April 2026 05:50:47 +0000 (0:00:01.144) 0:39:49.454 ******** 2026-04-09 05:50:55.872010 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.872023 | orchestrator | 2026-04-09 05:50:55.872036 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 05:50:55.872056 | orchestrator | Thursday 09 April 2026 05:50:48 +0000 (0:00:01.130) 0:39:50.585 ******** 2026-04-09 
05:50:55.872067 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.872078 | orchestrator | 2026-04-09 05:50:55.872089 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 05:50:55.872100 | orchestrator | Thursday 09 April 2026 05:50:49 +0000 (0:00:01.224) 0:39:51.810 ******** 2026-04-09 05:50:55.872111 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.872122 | orchestrator | 2026-04-09 05:50:55.872132 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 05:50:55.872143 | orchestrator | Thursday 09 April 2026 05:50:51 +0000 (0:00:01.118) 0:39:52.929 ******** 2026-04-09 05:50:55.872154 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.872165 | orchestrator | 2026-04-09 05:50:55.872176 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 05:50:55.872186 | orchestrator | Thursday 09 April 2026 05:50:52 +0000 (0:00:01.179) 0:39:54.108 ******** 2026-04-09 05:50:55.872197 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:55.872208 | orchestrator | 2026-04-09 05:50:55.872219 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 05:50:55.872229 | orchestrator | Thursday 09 April 2026 05:50:53 +0000 (0:00:01.156) 0:39:55.265 ******** 2026-04-09 05:50:55.872240 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.872251 | orchestrator | 2026-04-09 05:50:55.872262 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 05:50:55.872273 | orchestrator | Thursday 09 April 2026 05:50:54 +0000 (0:00:01.144) 0:39:56.410 ******** 2026-04-09 05:50:55.872283 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:55.872294 | orchestrator | 2026-04-09 05:50:55.872305 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] 
*********************** 2026-04-09 05:50:55.872316 | orchestrator | Thursday 09 April 2026 05:50:55 +0000 (0:00:01.173) 0:39:57.583 ******** 2026-04-09 05:50:55.872327 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:55.872338 | orchestrator | 2026-04-09 05:50:55.872355 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 05:50:58.259841 | orchestrator | Thursday 09 April 2026 05:50:56 +0000 (0:00:01.151) 0:39:58.734 ******** 2026-04-09 05:50:58.259940 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:50:58.259956 | orchestrator | 2026-04-09 05:50:58.259968 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 05:50:58.259980 | orchestrator | Thursday 09 April 2026 05:50:58 +0000 (0:00:01.171) 0:39:59.906 ******** 2026-04-09 05:50:58.259994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:50:58.260010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'uuids': ['c5a762f6-19fc-430f-b395-3c5066cc9fcd'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 
'host': '', 'holders': ['T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy']}})  2026-04-09 05:50:58.260042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60e6f74a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 05:50:58.260077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f']}})  2026-04-09 05:50:58.260090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:50:58.260103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:50:58.260133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:50:58.260147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:50:58.260159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV', 'dm-uuid-CRYPT-LUKS2-952a49d36c2646fe9329a26e5adefe63-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:50:58.260176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:50:58.260195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'uuids': ['952a49d3-6c26-46fe-9329-a26e5adefe63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV']}})  2026-04-09 05:50:58.260207 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6']}})  2026-04-09 05:50:58.260219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:50:58.260250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9009f97f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:50:59.649795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:50:59.649898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:50:59.649915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy', 'dm-uuid-CRYPT-LUKS2-c5a762f619fc430fb3953c5066cc9fcd-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:50:59.649930 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:50:59.649943 | orchestrator | 2026-04-09 05:50:59.649955 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 05:50:59.649967 | orchestrator | Thursday 09 April 2026 05:50:59 +0000 (0:00:01.397) 0:40:01.303 ******** 2026-04-09 05:50:59.649980 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:59.649994 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'uuids': ['c5a762f6-19fc-430f-b395-3c5066cc9fcd'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:59.650006 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60e6f74a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:59.650133 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:59.650185 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:59.650198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:59.650211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:59.650223 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:50:59.650256 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV', 'dm-uuid-CRYPT-LUKS2-952a49d36c2646fe9329a26e5adefe63-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:51:05.144318 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:51:05.144451 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'uuids': ['952a49d3-6c26-46fe-9329-a26e5adefe63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:51:05.144470 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:51:05.144486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:51:05.144557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9009f97f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:51:05.144574 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:51:05.144587 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:51:05.144598 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy', 'dm-uuid-CRYPT-LUKS2-c5a762f619fc430fb3953c5066cc9fcd-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:51:05.144619 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:05.144632 | orchestrator | 2026-04-09 05:51:05.144644 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 05:51:05.144657 | orchestrator | Thursday 09 April 2026 05:51:00 +0000 (0:00:01.490) 0:40:02.794 ******** 2026-04-09 05:51:05.144668 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:05.144679 | orchestrator | 2026-04-09 05:51:05.144690 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 05:51:05.144806 | orchestrator | Thursday 09 April 2026 05:51:02 +0000 (0:00:01.596) 0:40:04.390 ******** 2026-04-09 05:51:05.144820 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:05.144834 | orchestrator | 2026-04-09 05:51:05.144848 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:51:05.144868 | orchestrator | Thursday 09 April 2026 05:51:03 +0000 (0:00:01.111) 0:40:05.502 ******** 2026-04-09 05:51:05.144881 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:05.144894 | orchestrator | 2026-04-09 05:51:05.144908 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:51:05.144931 | orchestrator | Thursday 09 April 2026 05:51:05 +0000 (0:00:01.507) 0:40:07.009 ******** 2026-04-09 05:51:45.867591 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.867787 | orchestrator | 2026-04-09 05:51:45.867808 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 05:51:45.867822 | orchestrator | Thursday 09 April 2026 05:51:06 +0000 (0:00:01.133) 0:40:08.142 ******** 2026-04-09 05:51:45.867834 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
05:51:45.867845 | orchestrator | 2026-04-09 05:51:45.867857 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 05:51:45.867868 | orchestrator | Thursday 09 April 2026 05:51:07 +0000 (0:00:01.236) 0:40:09.379 ******** 2026-04-09 05:51:45.867880 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.867891 | orchestrator | 2026-04-09 05:51:45.867901 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 05:51:45.867913 | orchestrator | Thursday 09 April 2026 05:51:08 +0000 (0:00:01.241) 0:40:10.620 ******** 2026-04-09 05:51:45.867924 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-09 05:51:45.867936 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-09 05:51:45.867947 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-09 05:51:45.867958 | orchestrator | 2026-04-09 05:51:45.867969 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 05:51:45.867980 | orchestrator | Thursday 09 April 2026 05:51:10 +0000 (0:00:01.714) 0:40:12.335 ******** 2026-04-09 05:51:45.867991 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-09 05:51:45.868002 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-09 05:51:45.868013 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-09 05:51:45.868024 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.868035 | orchestrator | 2026-04-09 05:51:45.868046 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 05:51:45.868057 | orchestrator | Thursday 09 April 2026 05:51:11 +0000 (0:00:01.159) 0:40:13.494 ******** 2026-04-09 05:51:45.868068 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-04-09 05:51:45.868080 | 
orchestrator | 2026-04-09 05:51:45.868091 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 05:51:45.868105 | orchestrator | Thursday 09 April 2026 05:51:12 +0000 (0:00:01.198) 0:40:14.693 ******** 2026-04-09 05:51:45.868142 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.868168 | orchestrator | 2026-04-09 05:51:45.868181 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 05:51:45.868194 | orchestrator | Thursday 09 April 2026 05:51:13 +0000 (0:00:01.168) 0:40:15.861 ******** 2026-04-09 05:51:45.868207 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.868220 | orchestrator | 2026-04-09 05:51:45.868232 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 05:51:45.868243 | orchestrator | Thursday 09 April 2026 05:51:15 +0000 (0:00:01.150) 0:40:17.011 ******** 2026-04-09 05:51:45.868254 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.868265 | orchestrator | 2026-04-09 05:51:45.868276 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 05:51:45.868287 | orchestrator | Thursday 09 April 2026 05:51:16 +0000 (0:00:01.179) 0:40:18.191 ******** 2026-04-09 05:51:45.868298 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:45.868309 | orchestrator | 2026-04-09 05:51:45.868320 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 05:51:45.868331 | orchestrator | Thursday 09 April 2026 05:51:17 +0000 (0:00:01.302) 0:40:19.493 ******** 2026-04-09 05:51:45.868342 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-09 05:51:45.868361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-09 05:51:45.868379 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-04-09 05:51:45.868398 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.868417 | orchestrator | 2026-04-09 05:51:45.868437 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 05:51:45.868451 | orchestrator | Thursday 09 April 2026 05:51:19 +0000 (0:00:01.401) 0:40:20.895 ******** 2026-04-09 05:51:45.868462 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-09 05:51:45.868473 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-09 05:51:45.868484 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-09 05:51:45.868495 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.868505 | orchestrator | 2026-04-09 05:51:45.868523 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 05:51:45.868541 | orchestrator | Thursday 09 April 2026 05:51:20 +0000 (0:00:01.447) 0:40:22.343 ******** 2026-04-09 05:51:45.868558 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-09 05:51:45.868573 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-09 05:51:45.868583 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-09 05:51:45.868594 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.868605 | orchestrator | 2026-04-09 05:51:45.868616 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 05:51:45.868627 | orchestrator | Thursday 09 April 2026 05:51:21 +0000 (0:00:01.430) 0:40:23.774 ******** 2026-04-09 05:51:45.868639 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:45.868650 | orchestrator | 2026-04-09 05:51:45.868661 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 05:51:45.868672 | orchestrator | Thursday 09 April 2026 05:51:23 +0000 
(0:00:01.150) 0:40:24.925 ******** 2026-04-09 05:51:45.868725 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 05:51:45.868740 | orchestrator | 2026-04-09 05:51:45.868751 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 05:51:45.868762 | orchestrator | Thursday 09 April 2026 05:51:24 +0000 (0:00:01.357) 0:40:26.282 ******** 2026-04-09 05:51:45.868793 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:51:45.868805 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:51:45.868816 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:51:45.868826 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 05:51:45.868848 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-09 05:51:45.868859 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 05:51:45.868869 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:51:45.868880 | orchestrator | 2026-04-09 05:51:45.868891 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 05:51:45.868903 | orchestrator | Thursday 09 April 2026 05:51:26 +0000 (0:00:01.782) 0:40:28.065 ******** 2026-04-09 05:51:45.868914 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:51:45.868924 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:51:45.868935 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:51:45.868946 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-09 05:51:45.868957 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-09 05:51:45.868968 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 05:51:45.868979 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:51:45.868990 | orchestrator | 2026-04-09 05:51:45.869001 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-09 05:51:45.869011 | orchestrator | Thursday 09 April 2026 05:51:28 +0000 (0:00:02.324) 0:40:30.390 ******** 2026-04-09 05:51:45.869022 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:45.869033 | orchestrator | 2026-04-09 05:51:45.869044 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-09 05:51:45.869055 | orchestrator | Thursday 09 April 2026 05:51:29 +0000 (0:00:01.124) 0:40:31.515 ******** 2026-04-09 05:51:45.869066 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:45.869077 | orchestrator | 2026-04-09 05:51:45.869088 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-09 05:51:45.869098 | orchestrator | Thursday 09 April 2026 05:51:30 +0000 (0:00:00.780) 0:40:32.295 ******** 2026-04-09 05:51:45.869109 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:45.869120 | orchestrator | 2026-04-09 05:51:45.869131 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-09 05:51:45.869142 | orchestrator | Thursday 09 April 2026 05:51:31 +0000 (0:00:00.891) 0:40:33.187 ******** 2026-04-09 05:51:45.869153 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-09 05:51:45.869168 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-09 05:51:45.869186 | orchestrator | 2026-04-09 05:51:45.869205 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-04-09 05:51:45.869224 | orchestrator | Thursday 09 April 2026 05:51:35 +0000 (0:00:03.876) 0:40:37.063 ******** 2026-04-09 05:51:45.869240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-09 05:51:45.869256 | orchestrator | 2026-04-09 05:51:45.869273 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 05:51:45.869288 | orchestrator | Thursday 09 April 2026 05:51:36 +0000 (0:00:01.323) 0:40:38.387 ******** 2026-04-09 05:51:45.869304 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-09 05:51:45.869321 | orchestrator | 2026-04-09 05:51:45.869338 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 05:51:45.869355 | orchestrator | Thursday 09 April 2026 05:51:37 +0000 (0:00:01.166) 0:40:39.553 ******** 2026-04-09 05:51:45.869371 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.869388 | orchestrator | 2026-04-09 05:51:45.869404 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 05:51:45.869422 | orchestrator | Thursday 09 April 2026 05:51:38 +0000 (0:00:01.138) 0:40:40.692 ******** 2026-04-09 05:51:45.869451 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:45.869469 | orchestrator | 2026-04-09 05:51:45.869486 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 05:51:45.869502 | orchestrator | Thursday 09 April 2026 05:51:40 +0000 (0:00:01.488) 0:40:42.181 ******** 2026-04-09 05:51:45.869518 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:45.869535 | orchestrator | 2026-04-09 05:51:45.869552 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 05:51:45.869569 | orchestrator | 
Thursday 09 April 2026 05:51:41 +0000 (0:00:01.514) 0:40:43.695 ******** 2026-04-09 05:51:45.869585 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:51:45.869602 | orchestrator | 2026-04-09 05:51:45.869620 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 05:51:45.869639 | orchestrator | Thursday 09 April 2026 05:51:43 +0000 (0:00:01.541) 0:40:45.236 ******** 2026-04-09 05:51:45.869657 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.869674 | orchestrator | 2026-04-09 05:51:45.869693 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 05:51:45.869748 | orchestrator | Thursday 09 April 2026 05:51:44 +0000 (0:00:01.151) 0:40:46.388 ******** 2026-04-09 05:51:45.869759 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.869770 | orchestrator | 2026-04-09 05:51:45.869790 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 05:51:45.869802 | orchestrator | Thursday 09 April 2026 05:51:45 +0000 (0:00:01.205) 0:40:47.593 ******** 2026-04-09 05:51:45.869812 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:51:45.869823 | orchestrator | 2026-04-09 05:51:45.869848 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 05:52:26.077044 | orchestrator | Thursday 09 April 2026 05:51:46 +0000 (0:00:01.124) 0:40:48.718 ******** 2026-04-09 05:52:26.077187 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.077213 | orchestrator | 2026-04-09 05:52:26.077233 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 05:52:26.077251 | orchestrator | Thursday 09 April 2026 05:51:48 +0000 (0:00:01.523) 0:40:50.242 ******** 2026-04-09 05:52:26.077268 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.077285 | orchestrator | 2026-04-09 05:52:26.077303 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 05:52:26.077320 | orchestrator | Thursday 09 April 2026 05:51:49 +0000 (0:00:01.549) 0:40:51.792 ******** 2026-04-09 05:52:26.077337 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.077354 | orchestrator | 2026-04-09 05:52:26.077371 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 05:52:26.077386 | orchestrator | Thursday 09 April 2026 05:51:50 +0000 (0:00:00.774) 0:40:52.566 ******** 2026-04-09 05:52:26.077402 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.077418 | orchestrator | 2026-04-09 05:52:26.077434 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 05:52:26.077450 | orchestrator | Thursday 09 April 2026 05:51:51 +0000 (0:00:00.813) 0:40:53.380 ******** 2026-04-09 05:52:26.077467 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.077484 | orchestrator | 2026-04-09 05:52:26.077500 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 05:52:26.077516 | orchestrator | Thursday 09 April 2026 05:51:52 +0000 (0:00:00.790) 0:40:54.171 ******** 2026-04-09 05:52:26.077534 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.077551 | orchestrator | 2026-04-09 05:52:26.077570 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 05:52:26.077590 | orchestrator | Thursday 09 April 2026 05:51:53 +0000 (0:00:00.796) 0:40:54.968 ******** 2026-04-09 05:52:26.077609 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.077627 | orchestrator | 2026-04-09 05:52:26.077646 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 05:52:26.077664 | orchestrator | Thursday 09 April 2026 05:51:53 +0000 (0:00:00.808) 0:40:55.776 ******** 2026-04-09 05:52:26.077744 | 
orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.077765 | orchestrator | 2026-04-09 05:52:26.077782 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 05:52:26.077800 | orchestrator | Thursday 09 April 2026 05:51:54 +0000 (0:00:00.815) 0:40:56.592 ******** 2026-04-09 05:52:26.077817 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.077836 | orchestrator | 2026-04-09 05:52:26.077857 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 05:52:26.077877 | orchestrator | Thursday 09 April 2026 05:51:55 +0000 (0:00:00.830) 0:40:57.422 ******** 2026-04-09 05:52:26.077896 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.077914 | orchestrator | 2026-04-09 05:52:26.077932 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 05:52:26.077949 | orchestrator | Thursday 09 April 2026 05:51:56 +0000 (0:00:00.806) 0:40:58.228 ******** 2026-04-09 05:52:26.077965 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.077981 | orchestrator | 2026-04-09 05:52:26.077998 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 05:52:26.078090 | orchestrator | Thursday 09 April 2026 05:51:57 +0000 (0:00:00.788) 0:40:59.017 ******** 2026-04-09 05:52:26.078112 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.078141 | orchestrator | 2026-04-09 05:52:26.078158 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 05:52:26.078174 | orchestrator | Thursday 09 April 2026 05:51:57 +0000 (0:00:00.791) 0:40:59.809 ******** 2026-04-09 05:52:26.078192 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078209 | orchestrator | 2026-04-09 05:52:26.078227 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 
05:52:26.078245 | orchestrator | Thursday 09 April 2026 05:51:58 +0000 (0:00:00.762) 0:41:00.572 ******** 2026-04-09 05:52:26.078261 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078278 | orchestrator | 2026-04-09 05:52:26.078293 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 05:52:26.078307 | orchestrator | Thursday 09 April 2026 05:51:59 +0000 (0:00:00.770) 0:41:01.342 ******** 2026-04-09 05:52:26.078321 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078334 | orchestrator | 2026-04-09 05:52:26.078348 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 05:52:26.078361 | orchestrator | Thursday 09 April 2026 05:52:00 +0000 (0:00:00.769) 0:41:02.112 ******** 2026-04-09 05:52:26.078373 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078386 | orchestrator | 2026-04-09 05:52:26.078399 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 05:52:26.078412 | orchestrator | Thursday 09 April 2026 05:52:01 +0000 (0:00:00.860) 0:41:02.972 ******** 2026-04-09 05:52:26.078424 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078436 | orchestrator | 2026-04-09 05:52:26.078449 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 05:52:26.078462 | orchestrator | Thursday 09 April 2026 05:52:01 +0000 (0:00:00.798) 0:41:03.771 ******** 2026-04-09 05:52:26.078475 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078488 | orchestrator | 2026-04-09 05:52:26.078501 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 05:52:26.078515 | orchestrator | Thursday 09 April 2026 05:52:02 +0000 (0:00:00.792) 0:41:04.563 ******** 2026-04-09 05:52:26.078529 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078542 | 
orchestrator | 2026-04-09 05:52:26.078556 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-09 05:52:26.078571 | orchestrator | Thursday 09 April 2026 05:52:03 +0000 (0:00:00.792) 0:41:05.355 ******** 2026-04-09 05:52:26.078584 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078597 | orchestrator | 2026-04-09 05:52:26.078611 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 05:52:26.078624 | orchestrator | Thursday 09 April 2026 05:52:04 +0000 (0:00:00.768) 0:41:06.124 ******** 2026-04-09 05:52:26.078678 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078693 | orchestrator | 2026-04-09 05:52:26.078729 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 05:52:26.078743 | orchestrator | Thursday 09 April 2026 05:52:05 +0000 (0:00:00.833) 0:41:06.958 ******** 2026-04-09 05:52:26.078755 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078767 | orchestrator | 2026-04-09 05:52:26.078781 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 05:52:26.078793 | orchestrator | Thursday 09 April 2026 05:52:05 +0000 (0:00:00.793) 0:41:07.751 ******** 2026-04-09 05:52:26.078806 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078820 | orchestrator | 2026-04-09 05:52:26.078834 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-09 05:52:26.078847 | orchestrator | Thursday 09 April 2026 05:52:06 +0000 (0:00:00.772) 0:41:08.524 ******** 2026-04-09 05:52:26.078860 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.078873 | orchestrator | 2026-04-09 05:52:26.078888 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 05:52:26.078901 | orchestrator | Thursday 09 
April 2026 05:52:07 +0000 (0:00:00.786) 0:41:09.311 ******** 2026-04-09 05:52:26.078915 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.078929 | orchestrator | 2026-04-09 05:52:26.078943 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 05:52:26.078957 | orchestrator | Thursday 09 April 2026 05:52:08 +0000 (0:00:01.550) 0:41:10.862 ******** 2026-04-09 05:52:26.078970 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.078983 | orchestrator | 2026-04-09 05:52:26.078996 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 05:52:26.079009 | orchestrator | Thursday 09 April 2026 05:52:10 +0000 (0:00:01.980) 0:41:12.842 ******** 2026-04-09 05:52:26.079021 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-09 05:52:26.079036 | orchestrator | 2026-04-09 05:52:26.079049 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 05:52:26.079062 | orchestrator | Thursday 09 April 2026 05:52:12 +0000 (0:00:01.342) 0:41:14.184 ******** 2026-04-09 05:52:26.079075 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.079088 | orchestrator | 2026-04-09 05:52:26.079100 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 05:52:26.079114 | orchestrator | Thursday 09 April 2026 05:52:13 +0000 (0:00:01.213) 0:41:15.398 ******** 2026-04-09 05:52:26.079127 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.079138 | orchestrator | 2026-04-09 05:52:26.079196 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 05:52:26.079212 | orchestrator | Thursday 09 April 2026 05:52:14 +0000 (0:00:01.214) 0:41:16.612 ******** 2026-04-09 05:52:26.079226 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 05:52:26.079239 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 05:52:26.079252 | orchestrator | 2026-04-09 05:52:26.079265 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 05:52:26.079297 | orchestrator | Thursday 09 April 2026 05:52:16 +0000 (0:00:01.776) 0:41:18.389 ******** 2026-04-09 05:52:26.079310 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.079335 | orchestrator | 2026-04-09 05:52:26.079347 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 05:52:26.079360 | orchestrator | Thursday 09 April 2026 05:52:17 +0000 (0:00:01.467) 0:41:19.856 ******** 2026-04-09 05:52:26.079371 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.079384 | orchestrator | 2026-04-09 05:52:26.079396 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 05:52:26.079409 | orchestrator | Thursday 09 April 2026 05:52:19 +0000 (0:00:01.136) 0:41:20.994 ******** 2026-04-09 05:52:26.079422 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.079448 | orchestrator | 2026-04-09 05:52:26.079461 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 05:52:26.079473 | orchestrator | Thursday 09 April 2026 05:52:19 +0000 (0:00:00.788) 0:41:21.782 ******** 2026-04-09 05:52:26.079486 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.079498 | orchestrator | 2026-04-09 05:52:26.079511 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 05:52:26.079524 | orchestrator | Thursday 09 April 2026 05:52:20 +0000 (0:00:00.792) 0:41:22.575 ******** 2026-04-09 05:52:26.079537 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-04-09 05:52:26.079550 | orchestrator | 2026-04-09 05:52:26.079563 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 05:52:26.079576 | orchestrator | Thursday 09 April 2026 05:52:21 +0000 (0:00:01.132) 0:41:23.707 ******** 2026-04-09 05:52:26.079589 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:52:26.079601 | orchestrator | 2026-04-09 05:52:26.079614 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-09 05:52:26.079627 | orchestrator | Thursday 09 April 2026 05:52:23 +0000 (0:00:01.861) 0:41:25.569 ******** 2026-04-09 05:52:26.079640 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 05:52:26.079652 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 05:52:26.079665 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 05:52:26.079678 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.079690 | orchestrator | 2026-04-09 05:52:26.079762 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-09 05:52:26.079786 | orchestrator | Thursday 09 April 2026 05:52:24 +0000 (0:00:01.147) 0:41:26.717 ******** 2026-04-09 05:52:26.079800 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:52:26.079815 | orchestrator | 2026-04-09 05:52:26.079829 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-09 05:52:26.079844 | orchestrator | Thursday 09 April 2026 05:52:25 +0000 (0:00:01.129) 0:41:27.847 ******** 2026-04-09 05:52:26.079878 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.328257 | orchestrator | 2026-04-09 05:53:09.328380 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-09 05:53:09.328398 | 
orchestrator | Thursday 09 April 2026 05:52:27 +0000 (0:00:01.205) 0:41:29.052 ******** 2026-04-09 05:53:09.328410 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.328423 | orchestrator | 2026-04-09 05:53:09.328435 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-09 05:53:09.328446 | orchestrator | Thursday 09 April 2026 05:52:28 +0000 (0:00:01.198) 0:41:30.251 ******** 2026-04-09 05:53:09.328457 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.328469 | orchestrator | 2026-04-09 05:53:09.328480 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-09 05:53:09.328491 | orchestrator | Thursday 09 April 2026 05:52:29 +0000 (0:00:01.154) 0:41:31.406 ******** 2026-04-09 05:53:09.328502 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.328513 | orchestrator | 2026-04-09 05:53:09.328524 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 05:53:09.328535 | orchestrator | Thursday 09 April 2026 05:52:30 +0000 (0:00:00.769) 0:41:32.175 ******** 2026-04-09 05:53:09.328546 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:09.328558 | orchestrator | 2026-04-09 05:53:09.328569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 05:53:09.328581 | orchestrator | Thursday 09 April 2026 05:52:32 +0000 (0:00:02.248) 0:41:34.424 ******** 2026-04-09 05:53:09.328592 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:09.328603 | orchestrator | 2026-04-09 05:53:09.328614 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 05:53:09.328625 | orchestrator | Thursday 09 April 2026 05:52:33 +0000 (0:00:00.777) 0:41:35.201 ******** 2026-04-09 05:53:09.328660 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-04-09 05:53:09.328672 | orchestrator | 2026-04-09 05:53:09.328684 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-09 05:53:09.328721 | orchestrator | Thursday 09 April 2026 05:52:34 +0000 (0:00:01.123) 0:41:36.325 ******** 2026-04-09 05:53:09.328734 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.328745 | orchestrator | 2026-04-09 05:53:09.328756 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-09 05:53:09.328767 | orchestrator | Thursday 09 April 2026 05:52:35 +0000 (0:00:01.165) 0:41:37.491 ******** 2026-04-09 05:53:09.328781 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.328796 | orchestrator | 2026-04-09 05:53:09.328809 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-09 05:53:09.328823 | orchestrator | Thursday 09 April 2026 05:52:36 +0000 (0:00:01.146) 0:41:38.638 ******** 2026-04-09 05:53:09.328836 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.328850 | orchestrator | 2026-04-09 05:53:09.328862 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-09 05:53:09.328876 | orchestrator | Thursday 09 April 2026 05:52:37 +0000 (0:00:01.180) 0:41:39.818 ******** 2026-04-09 05:53:09.328889 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.328902 | orchestrator | 2026-04-09 05:53:09.328915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-09 05:53:09.328928 | orchestrator | Thursday 09 April 2026 05:52:39 +0000 (0:00:01.185) 0:41:41.004 ******** 2026-04-09 05:53:09.328943 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.328956 | orchestrator | 2026-04-09 05:53:09.328970 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-09 05:53:09.328983 | orchestrator | 
Thursday 09 April 2026 05:52:40 +0000 (0:00:01.164) 0:41:42.169 ******** 2026-04-09 05:53:09.328994 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329005 | orchestrator | 2026-04-09 05:53:09.329016 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-09 05:53:09.329027 | orchestrator | Thursday 09 April 2026 05:52:41 +0000 (0:00:01.143) 0:41:43.312 ******** 2026-04-09 05:53:09.329038 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329049 | orchestrator | 2026-04-09 05:53:09.329060 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-09 05:53:09.329071 | orchestrator | Thursday 09 April 2026 05:52:42 +0000 (0:00:01.247) 0:41:44.560 ******** 2026-04-09 05:53:09.329082 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329093 | orchestrator | 2026-04-09 05:53:09.329104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-09 05:53:09.329115 | orchestrator | Thursday 09 April 2026 05:52:43 +0000 (0:00:01.179) 0:41:45.740 ******** 2026-04-09 05:53:09.329125 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:09.329136 | orchestrator | 2026-04-09 05:53:09.329147 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-09 05:53:09.329158 | orchestrator | Thursday 09 April 2026 05:52:44 +0000 (0:00:00.857) 0:41:46.597 ******** 2026-04-09 05:53:09.329169 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-09 05:53:09.329182 | orchestrator | 2026-04-09 05:53:09.329193 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-09 05:53:09.329204 | orchestrator | Thursday 09 April 2026 05:52:45 +0000 (0:00:01.158) 0:41:47.755 ******** 2026-04-09 05:53:09.329215 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-04-09 05:53:09.329226 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-09 05:53:09.329237 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-09 05:53:09.329248 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-09 05:53:09.329259 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-09 05:53:09.329286 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-09 05:53:09.329306 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-09 05:53:09.329317 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-09 05:53:09.329329 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-09 05:53:09.329359 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-09 05:53:09.329371 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-09 05:53:09.329382 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-09 05:53:09.329393 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-09 05:53:09.329404 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-09 05:53:09.329415 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-09 05:53:09.329426 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-09 05:53:09.329437 | orchestrator | 2026-04-09 05:53:09.329448 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-09 05:53:09.329459 | orchestrator | Thursday 09 April 2026 05:52:52 +0000 (0:00:06.289) 0:41:54.045 ******** 2026-04-09 05:53:09.329470 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-09 05:53:09.329481 | orchestrator | 2026-04-09 05:53:09.329492 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-04-09 05:53:09.329503 | orchestrator | Thursday 09 April 2026 05:52:53 +0000 (0:00:01.129) 0:41:55.175 ******** 2026-04-09 05:53:09.329514 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 05:53:09.329527 | orchestrator | 2026-04-09 05:53:09.329538 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-09 05:53:09.329549 | orchestrator | Thursday 09 April 2026 05:52:54 +0000 (0:00:01.506) 0:41:56.681 ******** 2026-04-09 05:53:09.329560 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 05:53:09.329571 | orchestrator | 2026-04-09 05:53:09.329582 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 05:53:09.329592 | orchestrator | Thursday 09 April 2026 05:52:56 +0000 (0:00:01.623) 0:41:58.304 ******** 2026-04-09 05:53:09.329603 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329614 | orchestrator | 2026-04-09 05:53:09.329625 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 05:53:09.329636 | orchestrator | Thursday 09 April 2026 05:52:57 +0000 (0:00:00.789) 0:41:59.094 ******** 2026-04-09 05:53:09.329647 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329658 | orchestrator | 2026-04-09 05:53:09.329669 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 05:53:09.329679 | orchestrator | Thursday 09 April 2026 05:52:58 +0000 (0:00:00.796) 0:41:59.891 ******** 2026-04-09 05:53:09.329690 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329745 | orchestrator | 2026-04-09 05:53:09.329756 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-04-09 05:53:09.329767 | orchestrator | Thursday 09 April 2026 05:52:58 +0000 (0:00:00.795) 0:42:00.687 ******** 2026-04-09 05:53:09.329778 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329789 | orchestrator | 2026-04-09 05:53:09.329800 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 05:53:09.329811 | orchestrator | Thursday 09 April 2026 05:52:59 +0000 (0:00:00.810) 0:42:01.497 ******** 2026-04-09 05:53:09.329822 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329833 | orchestrator | 2026-04-09 05:53:09.329844 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-09 05:53:09.329856 | orchestrator | Thursday 09 April 2026 05:53:00 +0000 (0:00:00.795) 0:42:02.292 ******** 2026-04-09 05:53:09.329874 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329885 | orchestrator | 2026-04-09 05:53:09.329896 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-09 05:53:09.329907 | orchestrator | Thursday 09 April 2026 05:53:01 +0000 (0:00:00.769) 0:42:03.062 ******** 2026-04-09 05:53:09.329918 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329929 | orchestrator | 2026-04-09 05:53:09.329941 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-09 05:53:09.329952 | orchestrator | Thursday 09 April 2026 05:53:01 +0000 (0:00:00.766) 0:42:03.828 ******** 2026-04-09 05:53:09.329963 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.329974 | orchestrator | 2026-04-09 05:53:09.329985 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-09 05:53:09.329996 | orchestrator | Thursday 09 
April 2026 05:53:02 +0000 (0:00:00.787) 0:42:04.616 ******** 2026-04-09 05:53:09.330007 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.330079 | orchestrator | 2026-04-09 05:53:09.330091 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-09 05:53:09.330103 | orchestrator | Thursday 09 April 2026 05:53:03 +0000 (0:00:00.768) 0:42:05.385 ******** 2026-04-09 05:53:09.330114 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:09.330125 | orchestrator | 2026-04-09 05:53:09.330136 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-09 05:53:09.330147 | orchestrator | Thursday 09 April 2026 05:53:04 +0000 (0:00:00.805) 0:42:06.190 ******** 2026-04-09 05:53:09.330158 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:09.330169 | orchestrator | 2026-04-09 05:53:09.330181 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-09 05:53:09.330192 | orchestrator | Thursday 09 April 2026 05:53:05 +0000 (0:00:00.862) 0:42:07.052 ******** 2026-04-09 05:53:09.330209 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-09 05:53:09.330220 | orchestrator | 2026-04-09 05:53:09.330232 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-09 05:53:09.330257 | orchestrator | Thursday 09 April 2026 05:53:09 +0000 (0:00:04.029) 0:42:11.082 ******** 2026-04-09 05:53:09.330289 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 05:53:49.883118 | orchestrator | 2026-04-09 05:53:49.883232 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 05:53:49.883246 | orchestrator | Thursday 09 April 2026 05:53:10 +0000 (0:00:00.848) 0:42:11.930 ******** 2026-04-09 05:53:49.883259 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-09 05:53:49.883272 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-09 05:53:49.883284 | orchestrator | 2026-04-09 05:53:49.883294 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 05:53:49.883304 | orchestrator | Thursday 09 April 2026 05:53:17 +0000 (0:00:07.117) 0:42:19.048 ******** 2026-04-09 05:53:49.883314 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.883324 | orchestrator | 2026-04-09 05:53:49.883334 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 05:53:49.883344 | orchestrator | Thursday 09 April 2026 05:53:17 +0000 (0:00:00.812) 0:42:19.860 ******** 2026-04-09 05:53:49.883354 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.883387 | orchestrator | 2026-04-09 05:53:49.883398 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 05:53:49.883409 | orchestrator | Thursday 09 April 2026 05:53:18 +0000 (0:00:00.837) 0:42:20.697 ******** 2026-04-09 05:53:49.883419 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.883428 | orchestrator | 2026-04-09 05:53:49.883438 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-04-09 05:53:49.883447 | orchestrator | Thursday 09 April 2026 05:53:19 +0000 (0:00:00.791) 0:42:21.489 ******** 2026-04-09 05:53:49.883457 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.883467 | orchestrator | 2026-04-09 05:53:49.883476 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 05:53:49.883486 | orchestrator | Thursday 09 April 2026 05:53:20 +0000 (0:00:00.814) 0:42:22.304 ******** 2026-04-09 05:53:49.883495 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.883505 | orchestrator | 2026-04-09 05:53:49.883514 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 05:53:49.883524 | orchestrator | Thursday 09 April 2026 05:53:21 +0000 (0:00:00.788) 0:42:23.092 ******** 2026-04-09 05:53:49.883533 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:49.883543 | orchestrator | 2026-04-09 05:53:49.883553 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 05:53:49.883563 | orchestrator | Thursday 09 April 2026 05:53:22 +0000 (0:00:00.880) 0:42:23.972 ******** 2026-04-09 05:53:49.883572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-09 05:53:49.883583 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-09 05:53:49.883592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-09 05:53:49.883602 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.883611 | orchestrator | 2026-04-09 05:53:49.883621 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 05:53:49.883630 | orchestrator | Thursday 09 April 2026 05:53:23 +0000 (0:00:01.043) 0:42:25.016 ******** 2026-04-09 05:53:49.883640 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-09 05:53:49.883650 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-09 05:53:49.883662 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-09 05:53:49.883673 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.883684 | orchestrator | 2026-04-09 05:53:49.883721 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 05:53:49.883733 | orchestrator | Thursday 09 April 2026 05:53:24 +0000 (0:00:01.103) 0:42:26.120 ******** 2026-04-09 05:53:49.883744 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-09 05:53:49.883755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-09 05:53:49.883767 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-09 05:53:49.883778 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.883789 | orchestrator | 2026-04-09 05:53:49.883801 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 05:53:49.883812 | orchestrator | Thursday 09 April 2026 05:53:25 +0000 (0:00:01.096) 0:42:27.217 ******** 2026-04-09 05:53:49.883824 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:49.883836 | orchestrator | 2026-04-09 05:53:49.883848 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 05:53:49.883860 | orchestrator | Thursday 09 April 2026 05:53:26 +0000 (0:00:00.781) 0:42:27.999 ******** 2026-04-09 05:53:49.883871 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 05:53:49.883882 | orchestrator | 2026-04-09 05:53:49.883893 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-09 05:53:49.883919 | orchestrator | Thursday 09 April 2026 05:53:27 +0000 (0:00:00.983) 0:42:28.983 ******** 2026-04-09 05:53:49.883931 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:49.883942 | orchestrator | 
2026-04-09 05:53:49.883958 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-09 05:53:49.883968 | orchestrator | Thursday 09 April 2026 05:53:28 +0000 (0:00:01.408) 0:42:30.391 ******** 2026-04-09 05:53:49.883978 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:49.883988 | orchestrator | 2026-04-09 05:53:49.884013 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-09 05:53:49.884023 | orchestrator | Thursday 09 April 2026 05:53:29 +0000 (0:00:00.846) 0:42:31.238 ******** 2026-04-09 05:53:49.884033 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:53:49.884043 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:53:49.884053 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:53:49.884063 | orchestrator | 2026-04-09 05:53:49.884072 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-09 05:53:49.884081 | orchestrator | Thursday 09 April 2026 05:53:30 +0000 (0:00:01.311) 0:42:32.550 ******** 2026-04-09 05:53:49.884091 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-04-09 05:53:49.884101 | orchestrator | 2026-04-09 05:53:49.884110 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-09 05:53:49.884120 | orchestrator | Thursday 09 April 2026 05:53:31 +0000 (0:00:01.111) 0:42:33.662 ******** 2026-04-09 05:53:49.884130 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.884139 | orchestrator | 2026-04-09 05:53:49.884149 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-09 05:53:49.884158 | orchestrator | Thursday 09 April 2026 05:53:32 +0000 (0:00:01.129) 
0:42:34.791 ******** 2026-04-09 05:53:49.884168 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.884178 | orchestrator | 2026-04-09 05:53:49.884187 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-09 05:53:49.884197 | orchestrator | Thursday 09 April 2026 05:53:34 +0000 (0:00:01.104) 0:42:35.896 ******** 2026-04-09 05:53:49.884207 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:49.884216 | orchestrator | 2026-04-09 05:53:49.884226 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-09 05:53:49.884236 | orchestrator | Thursday 09 April 2026 05:53:35 +0000 (0:00:01.422) 0:42:37.318 ******** 2026-04-09 05:53:49.884245 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:49.884255 | orchestrator | 2026-04-09 05:53:49.884264 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-09 05:53:49.884274 | orchestrator | Thursday 09 April 2026 05:53:36 +0000 (0:00:01.165) 0:42:38.483 ******** 2026-04-09 05:53:49.884284 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-09 05:53:49.884294 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-09 05:53:49.884303 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-09 05:53:49.884313 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-09 05:53:49.884323 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-09 05:53:49.884332 | orchestrator | 2026-04-09 05:53:49.884342 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-09 05:53:49.884351 | orchestrator | Thursday 09 April 2026 05:53:39 +0000 (0:00:02.463) 0:42:40.947 ******** 2026-04-09 
05:53:49.884361 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.884371 | orchestrator | 2026-04-09 05:53:49.884380 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-09 05:53:49.884390 | orchestrator | Thursday 09 April 2026 05:53:39 +0000 (0:00:00.763) 0:42:41.711 ******** 2026-04-09 05:53:49.884400 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-04-09 05:53:49.884409 | orchestrator | 2026-04-09 05:53:49.884419 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-09 05:53:49.884434 | orchestrator | Thursday 09 April 2026 05:53:41 +0000 (0:00:01.163) 0:42:42.875 ******** 2026-04-09 05:53:49.884444 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-09 05:53:49.884454 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-09 05:53:49.884463 | orchestrator | 2026-04-09 05:53:49.884473 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-09 05:53:49.884482 | orchestrator | Thursday 09 April 2026 05:53:42 +0000 (0:00:01.845) 0:42:44.721 ******** 2026-04-09 05:53:49.884492 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 05:53:49.884501 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 05:53:49.884511 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 05:53:49.884521 | orchestrator | 2026-04-09 05:53:49.884530 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-09 05:53:49.884540 | orchestrator | Thursday 09 April 2026 05:53:46 +0000 (0:00:03.582) 0:42:48.304 ******** 2026-04-09 05:53:49.884549 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-09 05:53:49.884559 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 
05:53:49.884569 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:53:49.884578 | orchestrator | 2026-04-09 05:53:49.884588 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-09 05:53:49.884597 | orchestrator | Thursday 09 April 2026 05:53:48 +0000 (0:00:01.637) 0:42:49.941 ******** 2026-04-09 05:53:49.884607 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.884616 | orchestrator | 2026-04-09 05:53:49.884626 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-09 05:53:49.884640 | orchestrator | Thursday 09 April 2026 05:53:48 +0000 (0:00:00.881) 0:42:50.823 ******** 2026-04-09 05:53:49.884650 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.884660 | orchestrator | 2026-04-09 05:53:49.884669 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-09 05:53:49.884679 | orchestrator | Thursday 09 April 2026 05:53:49 +0000 (0:00:00.773) 0:42:51.597 ******** 2026-04-09 05:53:49.884689 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:53:49.884715 | orchestrator | 2026-04-09 05:53:49.884730 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-09 05:54:53.400073 | orchestrator | Thursday 09 April 2026 05:53:50 +0000 (0:00:00.776) 0:42:52.373 ******** 2026-04-09 05:54:53.400191 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-04-09 05:54:53.400209 | orchestrator | 2026-04-09 05:54:53.400223 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-09 05:54:53.400234 | orchestrator | Thursday 09 April 2026 05:53:51 +0000 (0:00:01.123) 0:42:53.497 ******** 2026-04-09 05:54:53.400245 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:54:53.400257 | orchestrator | 2026-04-09 05:54:53.400268 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-04-09 05:54:53.400279 | orchestrator | Thursday 09 April 2026 05:53:53 +0000 (0:00:01.485) 0:42:54.983 ******** 2026-04-09 05:54:53.400290 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:54:53.400301 | orchestrator | 2026-04-09 05:54:53.400312 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-09 05:54:53.400323 | orchestrator | Thursday 09 April 2026 05:53:56 +0000 (0:00:03.383) 0:42:58.366 ******** 2026-04-09 05:54:53.400333 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-04-09 05:54:53.400344 | orchestrator | 2026-04-09 05:54:53.400355 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-09 05:54:53.400366 | orchestrator | Thursday 09 April 2026 05:53:57 +0000 (0:00:01.154) 0:42:59.521 ******** 2026-04-09 05:54:53.400377 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:54:53.400388 | orchestrator | 2026-04-09 05:54:53.400399 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-09 05:54:53.400435 | orchestrator | Thursday 09 April 2026 05:53:59 +0000 (0:00:01.994) 0:43:01.515 ******** 2026-04-09 05:54:53.400446 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:54:53.400457 | orchestrator | 2026-04-09 05:54:53.400468 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-09 05:54:53.400479 | orchestrator | Thursday 09 April 2026 05:54:01 +0000 (0:00:01.953) 0:43:03.468 ******** 2026-04-09 05:54:53.400490 | orchestrator | ok: [testbed-node-4] 2026-04-09 05:54:53.400500 | orchestrator | 2026-04-09 05:54:53.400511 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-09 05:54:53.400522 | orchestrator | Thursday 09 April 2026 05:54:03 +0000 (0:00:02.242) 0:43:05.711 ******** 2026-04-09 
05:54:53.400533 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:54:53.400545 | orchestrator | 2026-04-09 05:54:53.400556 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-09 05:54:53.400566 | orchestrator | Thursday 09 April 2026 05:54:05 +0000 (0:00:01.190) 0:43:06.901 ******** 2026-04-09 05:54:53.400577 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:54:53.400588 | orchestrator | 2026-04-09 05:54:53.400599 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/<cluster>-<osd-id> is present] ********* 2026-04-09 05:54:53.400611 | orchestrator | Thursday 09 April 2026 05:54:06 +0000 (0:00:01.199) 0:43:08.101 ******** 2026-04-09 05:54:53.400624 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-09 05:54:53.400638 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 05:54:53.400654 | orchestrator | 2026-04-09 05:54:53.400673 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-09 05:54:53.400692 | orchestrator | Thursday 09 April 2026 05:54:08 +0000 (0:00:01.821) 0:43:09.923 ******** 2026-04-09 05:54:53.400753 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-09 05:54:53.400772 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 05:54:53.400790 | orchestrator | 2026-04-09 05:54:53.400808 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-09 05:54:53.400827 | orchestrator | Thursday 09 April 2026 05:54:10 +0000 (0:00:02.882) 0:43:12.805 ******** 2026-04-09 05:54:53.400845 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-09 05:54:53.400866 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-09 05:54:53.400884 | orchestrator | 2026-04-09 05:54:53.400903 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-09 05:54:53.400922 | orchestrator | Thursday 09 April 2026 05:54:15 +0000 (0:00:04.238)
0:43:17.044 ******** 2026-04-09 05:54:53.400940 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:54:53.400957 | orchestrator | 2026-04-09 05:54:53.400969 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-09 05:54:53.400980 | orchestrator | Thursday 09 April 2026 05:54:16 +0000 (0:00:00.962) 0:43:18.007 ******** 2026-04-09 05:54:53.400991 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:54:53.401001 | orchestrator | 2026-04-09 05:54:53.401013 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-09 05:54:53.401023 | orchestrator | Thursday 09 April 2026 05:54:16 +0000 (0:00:00.862) 0:43:18.870 ******** 2026-04-09 05:54:53.401034 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:54:53.401045 | orchestrator | 2026-04-09 05:54:53.401056 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-09 05:54:53.401067 | orchestrator | Thursday 09 April 2026 05:54:17 +0000 (0:00:00.940) 0:43:19.811 ******** 2026-04-09 05:54:53.401078 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:54:53.401089 | orchestrator | 2026-04-09 05:54:53.401100 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-09 05:54:53.401111 | orchestrator | Thursday 09 April 2026 05:54:18 +0000 (0:00:00.808) 0:43:20.620 ******** 2026-04-09 05:54:53.401122 | orchestrator | skipping: [testbed-node-4] 2026-04-09 05:54:53.401133 | orchestrator | 2026-04-09 05:54:53.401144 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-09 05:54:53.401183 | orchestrator | Thursday 09 April 2026 05:54:19 +0000 (0:00:00.762) 0:43:21.382 ******** 2026-04-09 05:54:53.401195 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-09 05:54:53.401207 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-09 05:54:53.401219 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-04-09 05:54:53.401250 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-04-09 05:54:53.401263 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-09 05:54:53.401273 | orchestrator | 2026-04-09 05:54:53.401285 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-09 05:54:53.401295 | orchestrator | 2026-04-09 05:54:53.401306 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 05:54:53.401317 | orchestrator | Thursday 09 April 2026 05:54:33 +0000 (0:00:14.034) 0:43:35.417 ******** 2026-04-09 05:54:53.401329 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-04-09 05:54:53.401339 | orchestrator | 2026-04-09 05:54:53.401350 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-09 05:54:53.401361 | orchestrator | Thursday 09 April 2026 05:54:34 +0000 (0:00:01.323) 0:43:36.740 ******** 2026-04-09 05:54:53.401372 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:54:53.401383 | orchestrator | 2026-04-09 05:54:53.401394 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-09 05:54:53.401404 | orchestrator | Thursday 09 April 2026 05:54:36 +0000 (0:00:01.429) 0:43:38.170 ******** 2026-04-09 05:54:53.401415 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:54:53.401426 | orchestrator | 2026-04-09 05:54:53.401437 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 05:54:53.401448 | 
orchestrator | Thursday 09 April 2026 05:54:37 +0000 (0:00:01.114) 0:43:39.284 ******** 2026-04-09 05:54:53.401459 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:54:53.401469 | orchestrator | 2026-04-09 05:54:53.401480 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-09 05:54:53.401491 | orchestrator | Thursday 09 April 2026 05:54:38 +0000 (0:00:01.410) 0:43:40.695 ******** 2026-04-09 05:54:53.401502 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:54:53.401513 | orchestrator | 2026-04-09 05:54:53.401524 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-09 05:54:53.401535 | orchestrator | Thursday 09 April 2026 05:54:39 +0000 (0:00:01.134) 0:43:41.830 ******** 2026-04-09 05:54:53.401546 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:54:53.401557 | orchestrator | 2026-04-09 05:54:53.401567 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-09 05:54:53.401578 | orchestrator | Thursday 09 April 2026 05:54:41 +0000 (0:00:01.153) 0:43:42.984 ******** 2026-04-09 05:54:53.401590 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:54:53.401600 | orchestrator | 2026-04-09 05:54:53.401611 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-09 05:54:53.401622 | orchestrator | Thursday 09 April 2026 05:54:42 +0000 (0:00:01.155) 0:43:44.139 ******** 2026-04-09 05:54:53.401633 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:54:53.401644 | orchestrator | 2026-04-09 05:54:53.401655 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-09 05:54:53.401665 | orchestrator | Thursday 09 April 2026 05:54:43 +0000 (0:00:01.132) 0:43:45.272 ******** 2026-04-09 05:54:53.401676 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:54:53.401687 | orchestrator | 2026-04-09 05:54:53.401733 | 
orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-09 05:54:53.401745 | orchestrator | Thursday 09 April 2026 05:54:44 +0000 (0:00:01.145) 0:43:46.417 ******** 2026-04-09 05:54:53.401756 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:54:53.401774 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:54:53.401785 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:54:53.401796 | orchestrator | 2026-04-09 05:54:53.401807 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-09 05:54:53.401818 | orchestrator | Thursday 09 April 2026 05:54:46 +0000 (0:00:02.059) 0:43:48.477 ******** 2026-04-09 05:54:53.401829 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:54:53.401840 | orchestrator | 2026-04-09 05:54:53.401851 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-09 05:54:53.401862 | orchestrator | Thursday 09 April 2026 05:54:47 +0000 (0:00:01.243) 0:43:49.721 ******** 2026-04-09 05:54:53.401872 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:54:53.401883 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:54:53.401894 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:54:53.401905 | orchestrator | 2026-04-09 05:54:53.401916 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 05:54:53.401927 | orchestrator | Thursday 09 April 2026 05:54:51 +0000 (0:00:03.281) 0:43:53.002 ******** 2026-04-09 05:54:53.401938 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-09 05:54:53.401949 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-09 05:54:53.401960 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-09 05:54:53.401971 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:54:53.401982 | orchestrator | 2026-04-09 05:54:53.401993 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 05:54:53.402004 | orchestrator | Thursday 09 April 2026 05:54:53 +0000 (0:00:01.890) 0:43:54.892 ******** 2026-04-09 05:54:53.402082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 05:54:53.402110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 05:55:14.269852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 05:55:14.269974 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:55:14.269992 | orchestrator | 2026-04-09 05:55:14.270005 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 05:55:14.270073 | orchestrator | Thursday 09 April 2026 05:54:54 +0000 (0:00:01.582) 0:43:56.474 ******** 2026-04-09 05:55:14.270089 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:14.270105 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:14.270117 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:14.270151 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:55:14.270163 | orchestrator | 2026-04-09 05:55:14.270174 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 05:55:14.270186 | orchestrator | Thursday 09 April 2026 05:54:55 +0000 (0:00:01.193) 0:43:57.668 ******** 2026-04-09 05:55:14.270199 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 05:54:48.740897', 'end': '2026-04-09 05:54:48.804219', 'delta': '0:00:00.063322', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 05:55:14.270214 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 05:54:49.336386', 'end': '2026-04-09 05:54:49.385607', 'delta': '0:00:00.049221', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 05:55:14.270258 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 05:54:49.879489', 'end': '2026-04-09 05:54:49.938602', 'delta': '0:00:00.059113', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 05:55:14.270272 | orchestrator | 2026-04-09 05:55:14.270284 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-04-09 05:55:14.270295 | orchestrator | Thursday 09 April 2026 05:54:56 +0000 (0:00:01.190) 0:43:58.859 ********
2026-04-09 05:55:14.270306 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:55:14.270318 | orchestrator |
2026-04-09 05:55:14.270329 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 05:55:14.270340 | orchestrator | Thursday 09 April 2026 05:54:58 +0000 (0:00:01.236) 0:44:00.095 ********
2026-04-09 05:55:14.270352 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:55:14.270363 | orchestrator |
2026-04-09 05:55:14.270375 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 05:55:14.270388 | orchestrator | Thursday 09 April 2026 05:54:59 +0000 (0:00:01.260) 0:44:01.356 ********
2026-04-09 05:55:14.270401 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:55:14.270414 | orchestrator |
2026-04-09 05:55:14.270427 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 05:55:14.270448 | orchestrator | Thursday 09 April 2026 05:55:00 +0000 (0:00:01.149) 0:44:02.505 ********
2026-04-09 05:55:14.270462 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 05:55:14.270475 | orchestrator |
2026-04-09 05:55:14.270488 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 05:55:14.270501 | orchestrator | Thursday 09 April 2026 05:55:02 +0000 (0:00:02.050) 0:44:04.555 ********
2026-04-09 05:55:14.270514 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:55:14.270526 | orchestrator |
2026-04-09 05:55:14.270557 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-09 05:55:14.270570 | orchestrator | Thursday 09 April 2026 05:55:03 +0000 (0:00:01.128) 0:44:05.684 ********
2026-04-09 05:55:14.270583 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:55:14.270596 | orchestrator |
2026-04-09 05:55:14.270609 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-09 05:55:14.270621 | orchestrator | Thursday 09 April 2026 05:55:04 +0000 (0:00:01.156) 0:44:06.841 ********
2026-04-09 05:55:14.270634 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:55:14.270646 | orchestrator |
2026-04-09 05:55:14.270659 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 05:55:14.270671 | orchestrator | Thursday 09 April 2026 05:55:06 +0000 (0:00:01.211) 0:44:08.053 ********
2026-04-09 05:55:14.270684 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:55:14.270754 | orchestrator |
2026-04-09 05:55:14.270771 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-09 05:55:14.270784 | orchestrator | Thursday 09 April 2026 05:55:07 +0000 (0:00:01.187) 0:44:09.240 ********
2026-04-09 05:55:14.270796 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:55:14.270807 | orchestrator |
2026-04-09 05:55:14.270818 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-09 05:55:14.270829 | orchestrator | Thursday 09 April 2026 05:55:08 +0000 (0:00:01.081) 0:44:10.321 ********
2026-04-09 05:55:14.270840 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:55:14.270851 | orchestrator |
2026-04-09 05:55:14.270863 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-09 05:55:14.270874 | orchestrator | Thursday 09 April 2026 05:55:09 +0000 (0:00:01.111) 0:44:11.433 ********
2026-04-09 05:55:14.270885 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:55:14.270896 | orchestrator |
2026-04-09 05:55:14.270908 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-09 05:55:14.270919 | orchestrator | Thursday 09 April
2026 05:55:10 +0000 (0:00:01.184) 0:44:12.618 ******** 2026-04-09 05:55:14.270930 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:55:14.270941 | orchestrator | 2026-04-09 05:55:14.270952 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 05:55:14.270963 | orchestrator | Thursday 09 April 2026 05:55:11 +0000 (0:00:01.127) 0:44:13.745 ******** 2026-04-09 05:55:14.270975 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:55:14.270986 | orchestrator | 2026-04-09 05:55:14.270997 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 05:55:14.271009 | orchestrator | Thursday 09 April 2026 05:55:12 +0000 (0:00:01.102) 0:44:14.848 ******** 2026-04-09 05:55:14.271020 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:55:14.271031 | orchestrator | 2026-04-09 05:55:14.271042 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 05:55:14.271054 | orchestrator | Thursday 09 April 2026 05:55:14 +0000 (0:00:01.174) 0:44:16.022 ******** 2026-04-09 05:55:14.271065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:55:14.271100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'uuids': ['0d8306b6-b8d9-4741-84fa-e650942907f5'], 'labels': [], 'masters': ['dm-3']}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN']}})  2026-04-09 05:55:14.392016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e55aa834', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 05:55:14.392115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e']}})  2026-04-09 05:55:14.392132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:55:14.392147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:55:14.392159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 05:55:14.392172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:55:14.392219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE', 'dm-uuid-CRYPT-LUKS2-a0c575bd231a435faa33ebc924c5d720-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:55:14.392248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:55:14.392261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'uuids': ['a0c575bd-231a-435f-aa33-ebc924c5d720'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE']}})  2026-04-09 05:55:14.392274 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6']}})  2026-04-09 05:55:14.392286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:55:14.392315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e4edfb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 05:55:15.794668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:55:15.794817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 05:55:15.794832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN', 'dm-uuid-CRYPT-LUKS2-0d8306b6b8d9474184fae650942907f5-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 05:55:15.794842 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:55:15.794851 | orchestrator | 2026-04-09 05:55:15.794859 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 05:55:15.794867 | orchestrator | Thursday 09 April 2026 05:55:15 +0000 (0:00:01.406) 0:44:17.428 ******** 2026-04-09 05:55:15.794877 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:15.794886 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'uuids': ['0d8306b6-b8d9-4741-84fa-e650942907f5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:15.794929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e55aa834', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:15.794953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:15.794964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:15.794973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:15.794981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:15.795002 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:15.795014 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE', 'dm-uuid-CRYPT-LUKS2-a0c575bd231a435faa33ebc924c5d720-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:21.174813 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:21.174885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'uuids': ['a0c575bd-231a-435f-aa33-ebc924c5d720'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:21.174894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6']}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:21.174913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:21.174937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e4edfb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:21.174943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:21.174949 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 05:55:21.174962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN', 'dm-uuid-CRYPT-LUKS2-0d8306b6b8d9474184fae650942907f5-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 05:55:21.174967 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:55:21.174972 | orchestrator |
2026-04-09 05:55:21.174977 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-09 05:55:21.174983 | orchestrator | Thursday 09 April 2026 05:55:17 +0000 (0:00:01.457) 0:44:18.886 ********
2026-04-09 05:55:21.174987 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:55:21.174992 | orchestrator |
2026-04-09 05:55:21.174996 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 05:55:21.175000 | orchestrator | Thursday 09 April 2026 05:55:18 +0000 (0:00:01.523) 0:44:20.410 ********
2026-04-09 05:55:21.175003 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:55:21.175007 | orchestrator |
2026-04-09 05:55:21.175011 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 05:55:21.175015 | orchestrator | Thursday 09 April 2026 05:55:19 +0000 (0:00:01.099) 0:44:21.509 ********
2026-04-09 05:55:21.175019 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:55:21.175023 | orchestrator |
2026-04-09 05:55:21.175027 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 05:55:21.175033 | orchestrator | Thursday 09 April 2026 05:55:21 +0000 (0:00:01.530) 0:44:23.040 ********
2026-04-09 05:56:03.593153 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:56:03.593274 | orchestrator |
2026-04-09 05:56:03.593301 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 05:56:03.593323 | orchestrator | Thursday 09 April 2026 05:55:22 +0000 (0:00:01.171) 0:44:24.212 ********
2026-04-09 05:56:03.593340 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:56:03.593358 | orchestrator |
2026-04-09 05:56:03.593376 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 05:56:03.593394 | orchestrator | Thursday 09 April 2026 05:55:23 +0000 (0:00:01.258) 0:44:25.471 ********
2026-04-09 05:56:03.593411 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:56:03.593430 | orchestrator |
2026-04-09 05:56:03.593447 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 05:56:03.593467 | orchestrator | Thursday 09 April 2026 05:55:24 +0000 (0:00:01.149) 0:44:26.620 ********
2026-04-09 05:56:03.593486 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 05:56:03.593507 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 05:56:03.593526 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 05:56:03.593545 | orchestrator |
2026-04-09 05:56:03.593557 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 05:56:03.593568 | orchestrator | Thursday 09 April 2026 05:55:26 +0000 (0:00:02.117) 0:44:28.738 ********
2026-04-09 05:56:03.593579 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 05:56:03.593617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 05:56:03.593629 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 05:56:03.593640 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:56:03.593651 | orchestrator |
2026-04-09 05:56:03.593675 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 05:56:03.593689 | orchestrator | Thursday 09 April 2026 05:55:28 +0000 (0:00:01.151) 0:44:29.889 ********
2026-04-09 05:56:03.593727 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-04-09 05:56:03.593742 | orchestrator |
2026-04-09 05:56:03.593755 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:56:03.593770 | orchestrator | Thursday 09 April 2026 05:55:29 +0000 (0:00:01.139) 0:44:31.029 ********
2026-04-09 05:56:03.593783 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:56:03.593796 | orchestrator |
2026-04-09 05:56:03.593809 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:56:03.593822 | orchestrator | Thursday 09 April 2026 05:55:30 +0000 (0:00:01.156) 0:44:32.185 ********
2026-04-09 05:56:03.593836 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:56:03.593848 | orchestrator |
2026-04-09 05:56:03.593861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:56:03.593874 | orchestrator | Thursday 09 April 2026 05:55:31 +0000 (0:00:01.206) 0:44:33.392 ********
2026-04-09 05:56:03.593887 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:56:03.593900 | orchestrator |
2026-04-09 05:56:03.593913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:56:03.593926 | orchestrator | Thursday 09 April 2026 05:55:32 +0000 (0:00:01.175) 0:44:34.568 ********
2026-04-09 05:56:03.593940 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:56:03.593953 | orchestrator |
2026-04-09 05:56:03.593967 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:56:03.593980 | orchestrator | Thursday 09 April 2026 05:55:33 +0000 (0:00:01.295) 0:44:35.864 ********
2026-04-09 05:56:03.593993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 05:56:03.594007 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 05:56:03.594081 | orchestrator | skipping: [testbed-node-5]
=> (item=testbed-node-5)  2026-04-09 05:56:03.594092 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:03.594103 | orchestrator | 2026-04-09 05:56:03.594114 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 05:56:03.594125 | orchestrator | Thursday 09 April 2026 05:55:35 +0000 (0:00:01.439) 0:44:37.304 ******** 2026-04-09 05:56:03.594136 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-09 05:56:03.594147 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-09 05:56:03.594158 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-09 05:56:03.594169 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:03.594181 | orchestrator | 2026-04-09 05:56:03.594191 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 05:56:03.594202 | orchestrator | Thursday 09 April 2026 05:55:36 +0000 (0:00:01.487) 0:44:38.791 ******** 2026-04-09 05:56:03.594228 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-09 05:56:03.594240 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-09 05:56:03.594251 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-09 05:56:03.594263 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:03.594273 | orchestrator | 2026-04-09 05:56:03.594284 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 05:56:03.594296 | orchestrator | Thursday 09 April 2026 05:55:38 +0000 (0:00:01.471) 0:44:40.263 ******** 2026-04-09 05:56:03.594307 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:03.594327 | orchestrator | 2026-04-09 05:56:03.594338 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 05:56:03.594349 | orchestrator | Thursday 09 April 2026 05:55:39 +0000 
(0:00:01.242) 0:44:41.506 ******** 2026-04-09 05:56:03.594360 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-09 05:56:03.594371 | orchestrator | 2026-04-09 05:56:03.594382 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 05:56:03.594393 | orchestrator | Thursday 09 April 2026 05:55:41 +0000 (0:00:01.758) 0:44:43.265 ******** 2026-04-09 05:56:03.594422 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:56:03.594433 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:56:03.594444 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:56:03.594455 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 05:56:03.594466 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 05:56:03.594477 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-09 05:56:03.594489 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:56:03.594499 | orchestrator | 2026-04-09 05:56:03.594510 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 05:56:03.594521 | orchestrator | Thursday 09 April 2026 05:55:43 +0000 (0:00:02.182) 0:44:45.447 ******** 2026-04-09 05:56:03.594532 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 05:56:03.594543 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 05:56:03.594554 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 05:56:03.594565 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-09 05:56:03.594576 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 05:56:03.594587 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-09 05:56:03.594598 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 05:56:03.594609 | orchestrator | 2026-04-09 05:56:03.594620 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-09 05:56:03.594631 | orchestrator | Thursday 09 April 2026 05:55:46 +0000 (0:00:02.672) 0:44:48.120 ******** 2026-04-09 05:56:03.594642 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:03.594653 | orchestrator | 2026-04-09 05:56:03.594664 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-09 05:56:03.594675 | orchestrator | Thursday 09 April 2026 05:55:47 +0000 (0:00:01.105) 0:44:49.226 ******** 2026-04-09 05:56:03.594686 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:03.594697 | orchestrator | 2026-04-09 05:56:03.594727 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-09 05:56:03.594738 | orchestrator | Thursday 09 April 2026 05:55:48 +0000 (0:00:00.809) 0:44:50.036 ******** 2026-04-09 05:56:03.594749 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:03.594760 | orchestrator | 2026-04-09 05:56:03.594771 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-09 05:56:03.594782 | orchestrator | Thursday 09 April 2026 05:55:49 +0000 (0:00:00.874) 0:44:50.911 ******** 2026-04-09 05:56:03.594793 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-09 05:56:03.594804 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-09 05:56:03.594816 | orchestrator | 2026-04-09 05:56:03.594827 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-04-09 05:56:03.594838 | orchestrator | Thursday 09 April 2026 05:55:52 +0000 (0:00:03.760) 0:44:54.672 ******** 2026-04-09 05:56:03.594849 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-04-09 05:56:03.594866 | orchestrator | 2026-04-09 05:56:03.594877 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 05:56:03.594888 | orchestrator | Thursday 09 April 2026 05:55:53 +0000 (0:00:01.087) 0:44:55.760 ******** 2026-04-09 05:56:03.594899 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-04-09 05:56:03.594911 | orchestrator | 2026-04-09 05:56:03.594921 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 05:56:03.594932 | orchestrator | Thursday 09 April 2026 05:55:54 +0000 (0:00:01.108) 0:44:56.868 ******** 2026-04-09 05:56:03.594943 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:03.594954 | orchestrator | 2026-04-09 05:56:03.594965 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 05:56:03.594976 | orchestrator | Thursday 09 April 2026 05:55:56 +0000 (0:00:01.122) 0:44:57.991 ******** 2026-04-09 05:56:03.594987 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:03.594999 | orchestrator | 2026-04-09 05:56:03.595010 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 05:56:03.595021 | orchestrator | Thursday 09 April 2026 05:55:57 +0000 (0:00:01.492) 0:44:59.483 ******** 2026-04-09 05:56:03.595037 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:03.595048 | orchestrator | 2026-04-09 05:56:03.595059 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 05:56:03.595070 | orchestrator | 
Thursday 09 April 2026 05:55:59 +0000 (0:00:01.569) 0:45:01.053 ******** 2026-04-09 05:56:03.595081 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:03.595092 | orchestrator | 2026-04-09 05:56:03.595103 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 05:56:03.595114 | orchestrator | Thursday 09 April 2026 05:56:01 +0000 (0:00:01.973) 0:45:03.027 ******** 2026-04-09 05:56:03.595125 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:03.595136 | orchestrator | 2026-04-09 05:56:03.595147 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 05:56:03.595158 | orchestrator | Thursday 09 April 2026 05:56:02 +0000 (0:00:01.156) 0:45:04.183 ******** 2026-04-09 05:56:03.595169 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:03.595180 | orchestrator | 2026-04-09 05:56:03.595191 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 05:56:03.595202 | orchestrator | Thursday 09 April 2026 05:56:03 +0000 (0:00:01.138) 0:45:05.322 ******** 2026-04-09 05:56:03.595213 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:03.595224 | orchestrator | 2026-04-09 05:56:03.595242 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 05:56:43.532906 | orchestrator | Thursday 09 April 2026 05:56:04 +0000 (0:00:01.143) 0:45:06.466 ******** 2026-04-09 05:56:43.533020 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.533038 | orchestrator | 2026-04-09 05:56:43.533051 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 05:56:43.533063 | orchestrator | Thursday 09 April 2026 05:56:06 +0000 (0:00:01.548) 0:45:08.014 ******** 2026-04-09 05:56:43.533074 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.533085 | orchestrator | 2026-04-09 05:56:43.533097 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 05:56:43.533108 | orchestrator | Thursday 09 April 2026 05:56:07 +0000 (0:00:01.520) 0:45:09.534 ******** 2026-04-09 05:56:43.533120 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533132 | orchestrator | 2026-04-09 05:56:43.533144 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 05:56:43.533155 | orchestrator | Thursday 09 April 2026 05:56:08 +0000 (0:00:00.867) 0:45:10.402 ******** 2026-04-09 05:56:43.533166 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533177 | orchestrator | 2026-04-09 05:56:43.533188 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 05:56:43.533200 | orchestrator | Thursday 09 April 2026 05:56:09 +0000 (0:00:00.763) 0:45:11.166 ******** 2026-04-09 05:56:43.533234 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.533247 | orchestrator | 2026-04-09 05:56:43.533258 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 05:56:43.533269 | orchestrator | Thursday 09 April 2026 05:56:10 +0000 (0:00:00.794) 0:45:11.961 ******** 2026-04-09 05:56:43.533281 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.533292 | orchestrator | 2026-04-09 05:56:43.533303 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 05:56:43.533314 | orchestrator | Thursday 09 April 2026 05:56:10 +0000 (0:00:00.780) 0:45:12.741 ******** 2026-04-09 05:56:43.533325 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.533338 | orchestrator | 2026-04-09 05:56:43.533351 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 05:56:43.533364 | orchestrator | Thursday 09 April 2026 05:56:11 +0000 (0:00:00.785) 0:45:13.527 ******** 2026-04-09 05:56:43.533377 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533390 | orchestrator | 2026-04-09 05:56:43.533403 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 05:56:43.533416 | orchestrator | Thursday 09 April 2026 05:56:12 +0000 (0:00:00.777) 0:45:14.304 ******** 2026-04-09 05:56:43.533428 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533441 | orchestrator | 2026-04-09 05:56:43.533455 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 05:56:43.533468 | orchestrator | Thursday 09 April 2026 05:56:13 +0000 (0:00:00.791) 0:45:15.096 ******** 2026-04-09 05:56:43.533480 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533493 | orchestrator | 2026-04-09 05:56:43.533506 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 05:56:43.533518 | orchestrator | Thursday 09 April 2026 05:56:14 +0000 (0:00:00.852) 0:45:15.949 ******** 2026-04-09 05:56:43.533532 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.533545 | orchestrator | 2026-04-09 05:56:43.533557 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 05:56:43.533570 | orchestrator | Thursday 09 April 2026 05:56:14 +0000 (0:00:00.850) 0:45:16.799 ******** 2026-04-09 05:56:43.533583 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.533595 | orchestrator | 2026-04-09 05:56:43.533608 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 05:56:43.533621 | orchestrator | Thursday 09 April 2026 05:56:15 +0000 (0:00:00.815) 0:45:17.615 ******** 2026-04-09 05:56:43.533633 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533645 | orchestrator | 2026-04-09 05:56:43.533658 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 
05:56:43.533672 | orchestrator | Thursday 09 April 2026 05:56:16 +0000 (0:00:00.787) 0:45:18.403 ******** 2026-04-09 05:56:43.533684 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533697 | orchestrator | 2026-04-09 05:56:43.533745 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 05:56:43.533757 | orchestrator | Thursday 09 April 2026 05:56:17 +0000 (0:00:00.769) 0:45:19.172 ******** 2026-04-09 05:56:43.533768 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533779 | orchestrator | 2026-04-09 05:56:43.533790 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 05:56:43.533801 | orchestrator | Thursday 09 April 2026 05:56:18 +0000 (0:00:00.775) 0:45:19.948 ******** 2026-04-09 05:56:43.533812 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533823 | orchestrator | 2026-04-09 05:56:43.533833 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 05:56:43.533861 | orchestrator | Thursday 09 April 2026 05:56:18 +0000 (0:00:00.812) 0:45:20.760 ******** 2026-04-09 05:56:43.533872 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533883 | orchestrator | 2026-04-09 05:56:43.533894 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 05:56:43.533905 | orchestrator | Thursday 09 April 2026 05:56:19 +0000 (0:00:00.760) 0:45:21.521 ******** 2026-04-09 05:56:43.533924 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533935 | orchestrator | 2026-04-09 05:56:43.533947 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 05:56:43.533958 | orchestrator | Thursday 09 April 2026 05:56:20 +0000 (0:00:00.812) 0:45:22.333 ******** 2026-04-09 05:56:43.533968 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.533979 | 
orchestrator | 2026-04-09 05:56:43.533990 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-09 05:56:43.534002 | orchestrator | Thursday 09 April 2026 05:56:21 +0000 (0:00:00.756) 0:45:23.090 ******** 2026-04-09 05:56:43.534013 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534084 | orchestrator | 2026-04-09 05:56:43.534096 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 05:56:43.534107 | orchestrator | Thursday 09 April 2026 05:56:21 +0000 (0:00:00.753) 0:45:23.844 ******** 2026-04-09 05:56:43.534136 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534149 | orchestrator | 2026-04-09 05:56:43.534160 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 05:56:43.534171 | orchestrator | Thursday 09 April 2026 05:56:22 +0000 (0:00:00.845) 0:45:24.690 ******** 2026-04-09 05:56:43.534182 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534193 | orchestrator | 2026-04-09 05:56:43.534204 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 05:56:43.534215 | orchestrator | Thursday 09 April 2026 05:56:23 +0000 (0:00:00.875) 0:45:25.565 ******** 2026-04-09 05:56:43.534226 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534237 | orchestrator | 2026-04-09 05:56:43.534248 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-09 05:56:43.534259 | orchestrator | Thursday 09 April 2026 05:56:24 +0000 (0:00:00.814) 0:45:26.380 ******** 2026-04-09 05:56:43.534270 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534281 | orchestrator | 2026-04-09 05:56:43.534292 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 05:56:43.534303 | orchestrator | Thursday 09 
April 2026 05:56:25 +0000 (0:00:00.780) 0:45:27.161 ******** 2026-04-09 05:56:43.534314 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.534325 | orchestrator | 2026-04-09 05:56:43.534336 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 05:56:43.534347 | orchestrator | Thursday 09 April 2026 05:56:26 +0000 (0:00:01.552) 0:45:28.713 ******** 2026-04-09 05:56:43.534358 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.534369 | orchestrator | 2026-04-09 05:56:43.534380 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 05:56:43.534391 | orchestrator | Thursday 09 April 2026 05:56:28 +0000 (0:00:01.844) 0:45:30.558 ******** 2026-04-09 05:56:43.534402 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-04-09 05:56:43.534414 | orchestrator | 2026-04-09 05:56:43.534425 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 05:56:43.534436 | orchestrator | Thursday 09 April 2026 05:56:29 +0000 (0:00:01.118) 0:45:31.676 ******** 2026-04-09 05:56:43.534447 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534458 | orchestrator | 2026-04-09 05:56:43.534469 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 05:56:43.534480 | orchestrator | Thursday 09 April 2026 05:56:30 +0000 (0:00:01.154) 0:45:32.831 ******** 2026-04-09 05:56:43.534491 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534502 | orchestrator | 2026-04-09 05:56:43.534513 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 05:56:43.534525 | orchestrator | Thursday 09 April 2026 05:56:32 +0000 (0:00:01.178) 0:45:34.010 ******** 2026-04-09 05:56:43.534535 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 05:56:43.534547 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 05:56:43.534565 | orchestrator | 2026-04-09 05:56:43.534576 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 05:56:43.534587 | orchestrator | Thursday 09 April 2026 05:56:33 +0000 (0:00:01.812) 0:45:35.822 ******** 2026-04-09 05:56:43.534598 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.534609 | orchestrator | 2026-04-09 05:56:43.534620 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 05:56:43.534631 | orchestrator | Thursday 09 April 2026 05:56:35 +0000 (0:00:01.432) 0:45:37.255 ******** 2026-04-09 05:56:43.534642 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534653 | orchestrator | 2026-04-09 05:56:43.534664 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 05:56:43.534675 | orchestrator | Thursday 09 April 2026 05:56:36 +0000 (0:00:01.118) 0:45:38.374 ******** 2026-04-09 05:56:43.534686 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534697 | orchestrator | 2026-04-09 05:56:43.534729 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 05:56:43.534740 | orchestrator | Thursday 09 April 2026 05:56:37 +0000 (0:00:00.853) 0:45:39.228 ******** 2026-04-09 05:56:43.534751 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534762 | orchestrator | 2026-04-09 05:56:43.534773 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 05:56:43.534784 | orchestrator | Thursday 09 April 2026 05:56:38 +0000 (0:00:00.764) 0:45:39.992 ******** 2026-04-09 05:56:43.534795 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-04-09 05:56:43.534806 | orchestrator | 2026-04-09 05:56:43.534817 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 05:56:43.534834 | orchestrator | Thursday 09 April 2026 05:56:39 +0000 (0:00:01.154) 0:45:41.147 ******** 2026-04-09 05:56:43.534845 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:56:43.534856 | orchestrator | 2026-04-09 05:56:43.534867 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-09 05:56:43.534878 | orchestrator | Thursday 09 April 2026 05:56:41 +0000 (0:00:01.861) 0:45:43.009 ******** 2026-04-09 05:56:43.534889 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 05:56:43.534899 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 05:56:43.534910 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 05:56:43.534921 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534932 | orchestrator | 2026-04-09 05:56:43.534943 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-09 05:56:43.534954 | orchestrator | Thursday 09 April 2026 05:56:42 +0000 (0:00:01.152) 0:45:44.161 ******** 2026-04-09 05:56:43.534964 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:56:43.534975 | orchestrator | 2026-04-09 05:56:43.534986 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-09 05:56:43.534997 | orchestrator | Thursday 09 April 2026 05:56:43 +0000 (0:00:01.136) 0:45:45.298 ******** 2026-04-09 05:56:43.535015 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:57:27.087487 | orchestrator | 2026-04-09 05:57:27.087614 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-09 05:57:27.087630 | 
orchestrator | Thursday 09 April 2026 05:56:44 +0000 (0:00:01.202) 0:45:46.500 ******** 2026-04-09 05:57:27.087641 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:57:27.087652 | orchestrator | 2026-04-09 05:57:27.087662 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-09 05:57:27.087672 | orchestrator | Thursday 09 April 2026 05:56:45 +0000 (0:00:01.198) 0:45:47.699 ******** 2026-04-09 05:57:27.087682 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:57:27.087691 | orchestrator | 2026-04-09 05:57:27.087701 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-09 05:57:27.087821 | orchestrator | Thursday 09 April 2026 05:56:47 +0000 (0:00:01.196) 0:45:48.896 ******** 2026-04-09 05:57:27.087835 | orchestrator | skipping: [testbed-node-5] 2026-04-09 05:57:27.087845 | orchestrator | 2026-04-09 05:57:27.087855 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 05:57:27.087865 | orchestrator | Thursday 09 April 2026 05:56:47 +0000 (0:00:00.802) 0:45:49.698 ******** 2026-04-09 05:57:27.087874 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:57:27.087885 | orchestrator | 2026-04-09 05:57:27.087895 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 05:57:27.087906 | orchestrator | Thursday 09 April 2026 05:56:49 +0000 (0:00:02.096) 0:45:51.795 ******** 2026-04-09 05:57:27.087915 | orchestrator | ok: [testbed-node-5] 2026-04-09 05:57:27.087925 | orchestrator | 2026-04-09 05:57:27.087935 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 05:57:27.087944 | orchestrator | Thursday 09 April 2026 05:56:50 +0000 (0:00:00.795) 0:45:52.591 ******** 2026-04-09 05:57:27.087954 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-04-09 05:57:27.087964 | orchestrator |
2026-04-09 05:57:27.087973 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 05:57:27.087983 | orchestrator | Thursday 09 April 2026 05:56:51 +0000 (0:00:01.247) 0:45:53.838 ********
2026-04-09 05:57:27.087992 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088002 | orchestrator |
2026-04-09 05:57:27.088012 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 05:57:27.088021 | orchestrator | Thursday 09 April 2026 05:56:53 +0000 (0:00:01.131) 0:45:54.970 ********
2026-04-09 05:57:27.088033 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088044 | orchestrator |
2026-04-09 05:57:27.088056 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 05:57:27.088067 | orchestrator | Thursday 09 April 2026 05:56:54 +0000 (0:00:01.205) 0:45:56.176 ********
2026-04-09 05:57:27.088078 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088090 | orchestrator |
2026-04-09 05:57:27.088101 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 05:57:27.088112 | orchestrator | Thursday 09 April 2026 05:56:55 +0000 (0:00:01.199) 0:45:57.375 ********
2026-04-09 05:57:27.088123 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088135 | orchestrator |
2026-04-09 05:57:27.088147 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 05:57:27.088158 | orchestrator | Thursday 09 April 2026 05:56:56 +0000 (0:00:01.123) 0:45:58.499 ********
2026-04-09 05:57:27.088170 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088180 | orchestrator |
2026-04-09 05:57:27.088192 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 05:57:27.088203 | orchestrator | Thursday 09 April 2026 05:56:57 +0000 (0:00:01.139) 0:45:59.638 ********
2026-04-09 05:57:27.088215 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088226 | orchestrator |
2026-04-09 05:57:27.088237 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 05:57:27.088249 | orchestrator | Thursday 09 April 2026 05:56:58 +0000 (0:00:01.124) 0:46:00.763 ********
2026-04-09 05:57:27.088260 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088272 | orchestrator |
2026-04-09 05:57:27.088283 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 05:57:27.088294 | orchestrator | Thursday 09 April 2026 05:57:00 +0000 (0:00:01.190) 0:46:01.954 ********
2026-04-09 05:57:27.088305 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088317 | orchestrator |
2026-04-09 05:57:27.088327 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 05:57:27.088339 | orchestrator | Thursday 09 April 2026 05:57:01 +0000 (0:00:01.146) 0:46:03.100 ********
2026-04-09 05:57:27.088350 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:57:27.088362 | orchestrator |
2026-04-09 05:57:27.088374 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 05:57:27.088407 | orchestrator | Thursday 09 April 2026 05:57:02 +0000 (0:00:00.808) 0:46:03.908 ********
2026-04-09 05:57:27.088417 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-04-09 05:57:27.088428 | orchestrator |
2026-04-09 05:57:27.088438 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 05:57:27.088448 | orchestrator | Thursday 09 April 2026 05:57:03 +0000 (0:00:01.084) 0:46:04.993 ********
2026-04-09 05:57:27.088457 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-09 05:57:27.088467 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-09 05:57:27.088477 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-09 05:57:27.088487 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-09 05:57:27.088496 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-09 05:57:27.088505 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-09 05:57:27.088515 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-09 05:57:27.088524 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-09 05:57:27.088535 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 05:57:27.088561 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 05:57:27.088571 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 05:57:27.088581 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 05:57:27.088590 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 05:57:27.088600 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 05:57:27.088609 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-09 05:57:27.088619 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-09 05:57:27.088629 | orchestrator |
2026-04-09 05:57:27.088639 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 05:57:27.088648 | orchestrator | Thursday 09 April 2026 05:57:09 +0000 (0:00:06.562) 0:46:11.556 ********
2026-04-09 05:57:27.088658 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-04-09 05:57:27.088667 | orchestrator |
2026-04-09 05:57:27.088677 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-09 05:57:27.088686 | orchestrator | Thursday 09 April 2026 05:57:10 +0000 (0:00:01.191) 0:46:12.747 ********
2026-04-09 05:57:27.088696 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 05:57:27.088707 | orchestrator |
2026-04-09 05:57:27.088733 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-09 05:57:27.088743 | orchestrator | Thursday 09 April 2026 05:57:12 +0000 (0:00:01.490) 0:46:14.237 ********
2026-04-09 05:57:27.088753 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 05:57:27.088762 | orchestrator |
2026-04-09 05:57:27.088772 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 05:57:27.088782 | orchestrator | Thursday 09 April 2026 05:57:13 +0000 (0:00:01.558) 0:46:15.796 ********
2026-04-09 05:57:27.088792 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088801 | orchestrator |
2026-04-09 05:57:27.088811 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 05:57:27.088821 | orchestrator | Thursday 09 April 2026 05:57:14 +0000 (0:00:00.787) 0:46:16.583 ********
2026-04-09 05:57:27.088830 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088840 | orchestrator |
2026-04-09 05:57:27.088850 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 05:57:27.088859 | orchestrator | Thursday 09 April 2026 05:57:15 +0000 (0:00:00.887) 0:46:17.378 ********
2026-04-09 05:57:27.088875 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088885 | orchestrator |
2026-04-09 05:57:27.088895 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 05:57:27.088904 | orchestrator | Thursday 09 April 2026 05:57:16 +0000 (0:00:00.887) 0:46:18.265 ********
2026-04-09 05:57:27.088914 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088923 | orchestrator |
2026-04-09 05:57:27.088933 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 05:57:27.088943 | orchestrator | Thursday 09 April 2026 05:57:17 +0000 (0:00:00.835) 0:46:19.101 ********
2026-04-09 05:57:27.088952 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.088962 | orchestrator |
2026-04-09 05:57:27.088972 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 05:57:27.088982 | orchestrator | Thursday 09 April 2026 05:57:18 +0000 (0:00:00.772) 0:46:19.873 ********
2026-04-09 05:57:27.088991 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.089001 | orchestrator |
2026-04-09 05:57:27.089010 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 05:57:27.089020 | orchestrator | Thursday 09 April 2026 05:57:18 +0000 (0:00:00.793) 0:46:20.667 ********
2026-04-09 05:57:27.089030 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.089039 | orchestrator |
2026-04-09 05:57:27.089049 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 05:57:27.089059 | orchestrator | Thursday 09 April 2026 05:57:19 +0000 (0:00:00.779) 0:46:21.447 ********
2026-04-09 05:57:27.089068 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.089078 | orchestrator |
2026-04-09 05:57:27.089087 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 05:57:27.089097 | orchestrator | Thursday 09 April 2026 05:57:20 +0000 (0:00:00.784) 0:46:22.232 ********
2026-04-09 05:57:27.089112 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.089122 | orchestrator |
2026-04-09 05:57:27.089131 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 05:57:27.089141 | orchestrator | Thursday 09 April 2026 05:57:21 +0000 (0:00:00.798) 0:46:23.030 ********
2026-04-09 05:57:27.089150 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:57:27.089160 | orchestrator |
2026-04-09 05:57:27.089170 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 05:57:27.089179 | orchestrator | Thursday 09 April 2026 05:57:21 +0000 (0:00:00.779) 0:46:23.810 ********
2026-04-09 05:57:27.089189 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:57:27.089198 | orchestrator |
2026-04-09 05:57:27.089208 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 05:57:27.089218 | orchestrator | Thursday 09 April 2026 05:57:22 +0000 (0:00:00.896) 0:46:24.707 ********
2026-04-09 05:57:27.089227 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-09 05:57:27.089237 | orchestrator |
2026-04-09 05:57:27.089246 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 05:57:27.089256 | orchestrator | Thursday 09 April 2026 05:57:26 +0000 (0:00:04.131) 0:46:28.838 ********
2026-04-09 05:57:27.089272 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 05:58:08.937913 | orchestrator |
2026-04-09 05:58:08.938089 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 05:58:08.938109 | orchestrator | Thursday 09 April 2026 05:57:27 +0000 (0:00:00.826) 0:46:29.665 ********
2026-04-09 05:58:08.938123 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-09 05:58:08.938163 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-09 05:58:08.938177 | orchestrator |
2026-04-09 05:58:08.938188 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-09 05:58:08.938199 | orchestrator | Thursday 09 April 2026 05:57:34 +0000 (0:00:06.970) 0:46:36.636 ********
2026-04-09 05:58:08.938211 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.938223 | orchestrator |
2026-04-09 05:58:08.938234 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-09 05:58:08.938245 | orchestrator | Thursday 09 April 2026 05:57:35 +0000 (0:00:00.768) 0:46:37.404 ********
2026-04-09 05:58:08.938255 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.938266 | orchestrator |
2026-04-09 05:58:08.938277 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 05:58:08.938289 | orchestrator | Thursday 09 April 2026 05:57:36 +0000 (0:00:00.800) 0:46:38.205 ********
2026-04-09 05:58:08.938300 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.938317 | orchestrator |
2026-04-09 05:58:08.938335 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 05:58:08.938354 | orchestrator | Thursday 09 April 2026 05:57:37 +0000 (0:00:00.807) 0:46:39.013 ********
2026-04-09 05:58:08.938372 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.938389 | orchestrator |
2026-04-09 05:58:08.938406 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 05:58:08.938424 | orchestrator | Thursday 09 April 2026 05:57:37 +0000 (0:00:00.795) 0:46:39.809 ********
2026-04-09 05:58:08.938441 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.938459 | orchestrator |
2026-04-09 05:58:08.938477 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 05:58:08.938496 | orchestrator | Thursday 09 April 2026 05:57:38 +0000 (0:00:00.829) 0:46:40.638 ********
2026-04-09 05:58:08.938516 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:58:08.938537 | orchestrator |
2026-04-09 05:58:08.938555 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 05:58:08.938576 | orchestrator | Thursday 09 April 2026 05:57:39 +0000 (0:00:00.869) 0:46:41.508 ********
2026-04-09 05:58:08.938590 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 05:58:08.938604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 05:58:08.938617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 05:58:08.938630 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.938643 | orchestrator |
2026-04-09 05:58:08.938656 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 05:58:08.938669 | orchestrator | Thursday 09 April 2026 05:57:41 +0000 (0:00:01.498) 0:46:43.007 ********
2026-04-09 05:58:08.938680 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 05:58:08.938691 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 05:58:08.938702 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 05:58:08.938738 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.938754 | orchestrator |
2026-04-09 05:58:08.938773 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 05:58:08.938790 | orchestrator | Thursday 09 April 2026 05:57:42 +0000 (0:00:01.551) 0:46:44.558 ********
2026-04-09 05:58:08.938809 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 05:58:08.938849 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 05:58:08.938862 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 05:58:08.938874 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.938896 | orchestrator |
2026-04-09 05:58:08.938907 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 05:58:08.938919 | orchestrator | Thursday 09 April 2026 05:57:43 +0000 (0:00:01.063) 0:46:45.621 ********
2026-04-09 05:58:08.938930 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:58:08.938941 | orchestrator |
2026-04-09 05:58:08.938952 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 05:58:08.938963 | orchestrator | Thursday 09 April 2026 05:57:44 +0000 (0:00:00.860) 0:46:46.482 ********
2026-04-09 05:58:08.938973 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-09 05:58:08.938987 | orchestrator |
2026-04-09 05:58:08.939006 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 05:58:08.939024 | orchestrator | Thursday 09 April 2026 05:57:45 +0000 (0:00:01.042) 0:46:47.525 ********
2026-04-09 05:58:08.939041 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:58:08.939058 | orchestrator |
2026-04-09 05:58:08.939076 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-09 05:58:08.939094 | orchestrator | Thursday 09 April 2026 05:57:47 +0000 (0:00:01.449) 0:46:48.975 ********
2026-04-09 05:58:08.939109 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:58:08.939120 | orchestrator |
2026-04-09 05:58:08.939154 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-09 05:58:08.939167 | orchestrator | Thursday 09 April 2026 05:57:47 +0000 (0:00:00.833) 0:46:49.808 ********
2026-04-09 05:58:08.939178 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 05:58:08.939189 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 05:58:08.939200 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 05:58:08.939211 | orchestrator |
2026-04-09 05:58:08.939221 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-09 05:58:08.939232 | orchestrator | Thursday 09 April 2026 05:57:49 +0000 (0:00:01.695) 0:46:51.504 ********
2026-04-09 05:58:08.939243 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5
2026-04-09 05:58:08.939254 | orchestrator |
2026-04-09 05:58:08.939265 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-09 05:58:08.939276 | orchestrator | Thursday 09 April 2026 05:57:50 +0000 (0:00:01.105) 0:46:52.609 ********
2026-04-09 05:58:08.939287 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.939298 | orchestrator |
2026-04-09 05:58:08.939308 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-09 05:58:08.939319 | orchestrator | Thursday 09 April 2026 05:57:51 +0000 (0:00:01.146) 0:46:53.756 ********
2026-04-09 05:58:08.939330 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.939341 | orchestrator |
2026-04-09 05:58:08.939352 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-09 05:58:08.939363 | orchestrator | Thursday 09 April 2026 05:57:53 +0000 (0:00:01.122) 0:46:54.878 ********
2026-04-09 05:58:08.939374 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:58:08.939385 | orchestrator |
2026-04-09 05:58:08.939395 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-09 05:58:08.939406 | orchestrator | Thursday 09 April 2026 05:57:54 +0000 (0:00:01.389) 0:46:56.268 ********
2026-04-09 05:58:08.939417 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:58:08.939428 | orchestrator |
2026-04-09 05:58:08.939439 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-09 05:58:08.939450 | orchestrator | Thursday 09 April 2026 05:57:56 +0000 (0:00:01.662) 0:46:57.931 ********
2026-04-09 05:58:08.939461 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 05:58:08.939472 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 05:58:08.939483 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 05:58:08.939508 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 05:58:08.939526 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 05:58:08.939544 | orchestrator |
2026-04-09 05:58:08.939561 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-09 05:58:08.939579 | orchestrator | Thursday 09 April 2026 05:57:58 +0000 (0:00:02.462) 0:47:00.393 ********
2026-04-09 05:58:08.939597 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.939614 | orchestrator |
2026-04-09 05:58:08.939634 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-09 05:58:08.939651 | orchestrator | Thursday 09 April 2026 05:57:59 +0000 (0:00:00.786) 0:47:01.179 ********
2026-04-09 05:58:08.939669 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5
2026-04-09 05:58:08.939689 | orchestrator |
2026-04-09 05:58:08.939702 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-09 05:58:08.939752 | orchestrator | Thursday 09 April 2026 05:58:00 +0000 (0:00:01.092) 0:47:02.271 ********
2026-04-09 05:58:08.939766 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 05:58:08.939778 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-09 05:58:08.939788 | orchestrator |
2026-04-09 05:58:08.939799 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-09 05:58:08.939811 | orchestrator | Thursday 09 April 2026 05:58:02 +0000 (0:00:01.896) 0:47:04.168 ********
2026-04-09 05:58:08.939822 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 05:58:08.939832 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-09 05:58:08.939851 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 05:58:08.939863 | orchestrator |
2026-04-09 05:58:08.939874 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-09 05:58:08.939885 | orchestrator | Thursday 09 April 2026 05:58:05 +0000 (0:00:03.208) 0:47:07.376 ********
2026-04-09 05:58:08.939896 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-09 05:58:08.939907 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-09 05:58:08.939918 | orchestrator | ok: [testbed-node-5]
2026-04-09 05:58:08.939929 | orchestrator |
2026-04-09 05:58:08.939940 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-09 05:58:08.939951 | orchestrator | Thursday 09 April 2026 05:58:07 +0000 (0:00:01.630) 0:47:09.007 ********
2026-04-09 05:58:08.939961 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.939972 | orchestrator |
2026-04-09 05:58:08.939983 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-09 05:58:08.939994 | orchestrator | Thursday 09 April 2026 05:58:08 +0000 (0:00:00.875) 0:47:09.883 ********
2026-04-09 05:58:08.940005 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.940016 | orchestrator |
2026-04-09 05:58:08.940026 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-09 05:58:08.940038 | orchestrator | Thursday 09 April 2026 05:58:08 +0000 (0:00:00.770) 0:47:10.653 ********
2026-04-09 05:58:08.940049 | orchestrator | skipping: [testbed-node-5]
2026-04-09 05:58:08.940059 | orchestrator |
2026-04-09 05:58:08.940079 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-09 06:00:30.016067 | orchestrator | Thursday 09 April 2026 05:58:09 +0000 (0:00:00.795) 0:47:11.448 ********
2026-04-09 06:00:30.016154 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5
2026-04-09 06:00:30.016164 | orchestrator |
2026-04-09 06:00:30.016170 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-09 06:00:30.016176 | orchestrator | Thursday 09 April 2026 05:58:10 +0000 (0:00:01.151) 0:47:12.600 ********
2026-04-09 06:00:30.016182 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:00:30.016188 | orchestrator |
2026-04-09 06:00:30.016194 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-09 06:00:30.016216 | orchestrator | Thursday 09 April 2026 05:58:12 +0000 (0:00:01.485) 0:47:14.086 ********
2026-04-09 06:00:30.016221 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:00:30.016227 | orchestrator |
2026-04-09 06:00:30.016232 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-09 06:00:30.016238 | orchestrator | Thursday 09 April 2026 05:58:15 +0000 (0:00:03.444) 0:47:17.531 ********
2026-04-09 06:00:30.016243 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5
2026-04-09 06:00:30.016248 | orchestrator |
2026-04-09 06:00:30.016253 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-09 06:00:30.016259 | orchestrator | Thursday 09 April 2026 05:58:16 +0000 (0:00:01.097) 0:47:18.628 ********
2026-04-09 06:00:30.016266 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:00:30.016275 | orchestrator |
2026-04-09 06:00:30.016283 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-09 06:00:30.016292 | orchestrator | Thursday 09 April 2026 05:58:18 +0000 (0:00:02.013) 0:47:20.642 ********
2026-04-09 06:00:30.016300 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:00:30.016308 | orchestrator |
2026-04-09 06:00:30.016317 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-09 06:00:30.016325 | orchestrator | Thursday 09 April 2026 05:58:20 +0000 (0:00:01.930) 0:47:22.572 ********
2026-04-09 06:00:30.016334 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:00:30.016343 | orchestrator |
2026-04-09 06:00:30.016352 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-09 06:00:30.016358 | orchestrator | Thursday 09 April 2026 05:58:22 +0000 (0:00:02.203) 0:47:24.776 ********
2026-04-09 06:00:30.016363 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:00:30.016369 | orchestrator |
2026-04-09 06:00:30.016374 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-09 06:00:30.016379 | orchestrator | Thursday 09 April 2026 05:58:24 +0000 (0:00:01.158) 0:47:25.935 ********
2026-04-09 06:00:30.016385 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:00:30.016390 | orchestrator |
2026-04-09 06:00:30.016395 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-09 06:00:30.016400 | orchestrator | Thursday 09 April 2026 05:58:25 +0000 (0:00:01.179) 0:47:27.115 ********
2026-04-09 06:00:30.016405 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-09 06:00:30.016410 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-09 06:00:30.016416 | orchestrator |
2026-04-09 06:00:30.016421 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-09 06:00:30.016426 | orchestrator | Thursday 09 April 2026 05:58:27 +0000 (0:00:01.857) 0:47:28.972 ********
2026-04-09 06:00:30.016431 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-09 06:00:30.016436 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-09 06:00:30.016442 | orchestrator |
2026-04-09 06:00:30.016447 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-09 06:00:30.016452 | orchestrator | Thursday 09 April 2026 05:58:30 +0000 (0:00:02.901) 0:47:31.874 ********
2026-04-09 06:00:30.016457 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-09 06:00:30.016462 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-09 06:00:30.016468 | orchestrator |
2026-04-09 06:00:30.016473 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-09 06:00:30.016478 | orchestrator | Thursday 09 April 2026 05:58:34 +0000 (0:00:04.230) 0:47:36.105 ********
2026-04-09 06:00:30.016483 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:00:30.016488 | orchestrator |
2026-04-09 06:00:30.016493 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-09 06:00:30.016498 | orchestrator | Thursday 09 April 2026 05:58:35 +0000 (0:00:00.911) 0:47:37.016 ********
2026-04-09 06:00:30.016503 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-09 06:00:30.016509 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:00:30.016520 | orchestrator |
2026-04-09 06:00:30.016536 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-09 06:00:30.016542 | orchestrator | Thursday 09 April 2026 05:58:48 +0000 (0:00:13.299) 0:47:50.316 ********
2026-04-09 06:00:30.016547 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:00:30.016552 | orchestrator |
2026-04-09 06:00:30.016558 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-04-09 06:00:30.016563 | orchestrator | Thursday 09 April 2026 05:58:49 +0000 (0:00:00.946) 0:47:51.263 ********
2026-04-09 06:00:30.016568 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:00:30.016573 | orchestrator |
2026-04-09 06:00:30.016578 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-04-09 06:00:30.016583 | orchestrator | Thursday 09 April 2026 05:58:50 +0000 (0:00:00.804) 0:47:52.067 ********
2026-04-09 06:00:30.016589 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:00:30.016594 | orchestrator |
2026-04-09 06:00:30.016599 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-04-09 06:00:30.016604 | orchestrator | Thursday 09 April 2026 05:58:50 +0000 (0:00:00.743) 0:47:52.811 ********
2026-04-09 06:00:30.016609 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:00:30.016614 | orchestrator |
2026-04-09 06:00:30.016619 | orchestrator | PLAY [Complete osd upgrade] ****************************************************
2026-04-09 06:00:30.016625 | orchestrator |
2026-04-09 06:00:30.016642 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 06:00:30.016649 | orchestrator | Thursday 09 April 2026 05:58:53 +0000 (0:00:02.505) 0:47:55.316 ********
2026-04-09 06:00:30.016655 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:00:30.016662 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:00:30.016668 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:00:30.016675 | orchestrator |
2026-04-09 06:00:30.016681 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 06:00:30.016688 | orchestrator | Thursday 09 April 2026 05:58:55 +0000 (0:00:01.641) 0:47:56.958 ********
2026-04-09 06:00:30.016694 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:00:30.016701 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:00:30.016707 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:00:30.016713 | orchestrator |
2026-04-09 06:00:30.016720 | orchestrator | TASK [Re-enable pg autoscale on pools] *****************************************
2026-04-09 06:00:30.016726 | orchestrator | Thursday 09 April 2026 05:58:56 +0000 (0:00:01.726) 0:47:58.685 ********
2026-04-09 06:00:30.016733 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-04-09 06:00:30.016762 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-04-09 06:00:30.016769 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-04-09 06:00:30.016776 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-04-09 06:00:30.016784 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-04-09 06:00:30.016790 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-04-09 06:00:30.016797 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-04-09 06:00:30.016803 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-04-09 06:00:30.016809 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-04-09 06:00:30.016815 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-04-09 06:00:30.016827 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-04-09 06:00:30.016833 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-04-09 06:00:30.016840 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-04-09 06:00:30.016846 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-04-09 06:00:30.016853 | orchestrator |
2026-04-09 06:00:30.016859 | orchestrator | TASK [Unset osd flags] *********************************************************
2026-04-09 06:00:30.016865 | orchestrator | Thursday 09 April 2026 06:00:13 +0000 (0:01:16.639) 0:49:15.324 ********
2026-04-09 06:00:30.016871 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-04-09 06:00:30.016877 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-04-09 06:00:30.016884 | orchestrator |
2026-04-09 06:00:30.016890 | orchestrator | TASK [Re-enable balancer] ******************************************************
2026-04-09 06:00:30.016896 | orchestrator | Thursday 09 April 2026 06:00:19 +0000 (0:00:05.557) 0:49:20.882 ********
2026-04-09 06:00:30.016902 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:00:30.016908 | orchestrator |
2026-04-09 06:00:30.016914 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] **********************
2026-04-09 06:00:30.016921 | orchestrator |
2026-04-09 06:00:30.016927 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 06:00:30.016933 | orchestrator | Thursday 09 April 2026 06:00:22 +0000 (0:00:03.321) 0:49:24.204 ********
2026-04-09 06:00:30.016939 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-09 06:00:30.016946 | orchestrator |
2026-04-09 06:00:30.016956 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 06:00:30.016962 | orchestrator | Thursday 09 April 2026 06:00:23 +0000 (0:00:01.129) 0:49:25.333 ********
2026-04-09 06:00:30.016968 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:30.016973 | orchestrator |
2026-04-09 06:00:30.016978 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 06:00:30.016983 | orchestrator | Thursday 09 April 2026 06:00:24 +0000 (0:00:01.452) 0:49:26.786 ********
2026-04-09 06:00:30.016988 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:30.016994 | orchestrator |
2026-04-09 06:00:30.016999 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 06:00:30.017004 | orchestrator | Thursday 09 April 2026 06:00:26 +0000 (0:00:01.549) 0:49:27.920 ********
2026-04-09 06:00:30.017009 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:30.017014 | orchestrator |
2026-04-09 06:00:30.017020 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 06:00:30.017025 | orchestrator | Thursday 09 April 2026 06:00:27 +0000 (0:00:01.549) 0:49:29.470 ********
2026-04-09 06:00:30.017030 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:30.017035 | orchestrator |
2026-04-09 06:00:30.017040 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 06:00:30.017045 | orchestrator | Thursday 09 April 2026 06:00:28 +0000 (0:00:01.166) 0:49:30.637 ********
2026-04-09 06:00:30.017051 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:30.017056 | orchestrator |
2026-04-09 06:00:30.017061 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 06:00:30.017070 | orchestrator | Thursday 09 April 2026 06:00:30 +0000 (0:00:01.239) 0:49:31.877 ********
2026-04-09 06:00:55.690226 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:55.690345 | orchestrator |
2026-04-09 06:00:55.690362 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 06:00:55.690376 | orchestrator | Thursday 09 April 2026 06:00:31 +0000 (0:00:01.178) 0:49:33.055 ********
2026-04-09 06:00:55.690388 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:00:55.690400 | orchestrator |
2026-04-09 06:00:55.690411 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 06:00:55.690449 | orchestrator | Thursday 09 April 2026 06:00:32 +0000 (0:00:01.176) 0:49:34.232 ********
2026-04-09 06:00:55.690461 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:55.690472 | orchestrator |
2026-04-09 06:00:55.690483 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 06:00:55.690494 | orchestrator | Thursday 09 April 2026 06:00:33 +0000 (0:00:01.140) 0:49:35.372 ********
2026-04-09 06:00:55.690505 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 06:00:55.690517 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:00:55.690528 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:00:55.690539 | orchestrator |
2026-04-09 06:00:55.690550 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 06:00:55.690560 | orchestrator | Thursday 09 April 2026 06:00:35 +0000 (0:00:01.708) 0:49:37.081 ********
2026-04-09 06:00:55.690571 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:55.690582 | orchestrator |
2026-04-09 06:00:55.690593 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 06:00:55.690604 | orchestrator | Thursday 09 April 2026 06:00:36 +0000 (0:00:01.265) 0:49:38.347 ********
2026-04-09 06:00:55.690615 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 06:00:55.690625 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:00:55.690636 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:00:55.690647 | orchestrator |
2026-04-09 06:00:55.690658 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 06:00:55.690669 | orchestrator | Thursday 09 April 2026 06:00:39 +0000 (0:00:03.280) 0:49:41.627 ********
2026-04-09 06:00:55.690680 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 06:00:55.690691 |
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 06:00:55.690702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 06:00:55.690713 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:00:55.690723 | orchestrator | 2026-04-09 06:00:55.690734 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 06:00:55.690776 | orchestrator | Thursday 09 April 2026 06:00:41 +0000 (0:00:01.433) 0:49:43.061 ******** 2026-04-09 06:00:55.690793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 06:00:55.690810 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 06:00:55.690824 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 06:00:55.690837 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:00:55.690850 | orchestrator | 2026-04-09 06:00:55.690865 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 06:00:55.690879 | orchestrator | Thursday 09 April 2026 06:00:43 +0000 (0:00:02.012) 0:49:45.074 ******** 2026-04-09 06:00:55.690910 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 06:00:55.690935 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 06:00:55.690969 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 06:00:55.690983 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:00:55.690997 | orchestrator | 2026-04-09 06:00:55.691010 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 06:00:55.691023 | orchestrator | Thursday 09 April 2026 06:00:44 +0000 (0:00:01.262) 0:49:46.336 ******** 2026-04-09 06:00:55.691038 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 06:00:37.010331', 'end': '2026-04-09 06:00:37.085409', 'delta': '0:00:00.075078', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 06:00:55.691053 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 06:00:37.631932', 'end': '2026-04-09 06:00:37.682818', 'delta': '0:00:00.050886', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 06:00:55.691065 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 06:00:38.545177', 'end': '2026-04-09 06:00:38.582262', 'delta': '0:00:00.037085', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 06:00:55.691077 | orchestrator | 2026-04-09 06:00:55.691089 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
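The "Find a running mon container" / "Set_fact running_mon - container" loop above probes each monitor host with `docker ps -q --filter name=ceph-mon-<hostname>` and records a host whose command printed a container ID. A minimal sketch of that selection logic, assuming hypothetical names (`find_running_mon`, `sample`); the container IDs are copied from the loop results in the log:

```python
def find_running_mon(results):
    """Return (hostname, container_id) for the first monitor whose
    `docker ps -q --filter name=ceph-mon-<hostname>` call printed an ID,
    or None if no mon container is up anywhere."""
    for host, stdout_lines in results.items():
        if stdout_lines:  # non-empty stdout means a matching container is running
            return host, stdout_lines[0]
    return None

# Sample stdout_lines as they appear in the docker ps loop results above.
sample = {
    "testbed-node-0": ["69d38aa54653"],
    "testbed-node-1": ["3e7867c40460"],
    "testbed-node-2": ["5ed6058fb18c"],
}
print(find_running_mon(sample))  # ('testbed-node-0', '69d38aa54653')
```

The playbook itself keeps the whole loop result and filters it in Jinja; this sketch only shows the effective "first non-empty stdout wins" behavior.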
2026-04-09 06:00:55.691100 | orchestrator | Thursday 09 April 2026 06:00:45 +0000 (0:00:01.239) 0:49:47.576 ********
2026-04-09 06:00:55.691111 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:55.691122 | orchestrator |
2026-04-09 06:00:55.691133 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 06:00:55.691144 | orchestrator | Thursday 09 April 2026 06:00:47 +0000 (0:00:01.725) 0:49:49.302 ********
2026-04-09 06:00:55.691161 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:00:55.691172 | orchestrator |
2026-04-09 06:00:55.691189 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 06:00:55.691200 | orchestrator | Thursday 09 April 2026 06:00:48 +0000 (0:00:01.275) 0:49:50.577 ********
2026-04-09 06:00:55.691211 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:55.691222 | orchestrator |
2026-04-09 06:00:55.691233 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 06:00:55.691244 | orchestrator | Thursday 09 April 2026 06:00:49 +0000 (0:00:01.153) 0:49:51.731 ********
2026-04-09 06:00:55.691255 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:55.691266 | orchestrator |
2026-04-09 06:00:55.691277 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 06:00:55.691288 | orchestrator | Thursday 09 April 2026 06:00:52 +0000 (0:00:02.203) 0:49:53.935 ********
2026-04-09 06:00:55.691299 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:00:55.691310 | orchestrator |
2026-04-09 06:00:55.691321 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-09 06:00:55.691332 | orchestrator | Thursday 09 April 2026 06:00:53 +0000 (0:00:01.189) 0:49:55.125 ********
2026-04-09 06:00:55.691343 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:00:55.691354 | orchestrator |
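The fsid discovery sequence above ("Get current fsid" followed by "Set_fact fsid") ends with the existing cluster fsid stored as a fact. As an illustration only, not the playbook's exact mechanism (which queries the running cluster), the same value can be recovered offline from the `[global]` section of `/etc/ceph/ceph.conf`; the sample fsid below is invented:

```python
import configparser

# Invented sample of a minimal ceph.conf; real deployments carry many more keys.
SAMPLE_CEPH_CONF = """\
[global]
fsid = 414e9a96-5b3c-4f2a-9c51-0123456789ab
mon host = 192.168.16.10,192.168.16.11,192.168.16.12
"""

def fsid_from_conf(conf_text):
    """Read the cluster fsid from ceph.conf-style INI text."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    return cp["global"]["fsid"]

print(fsid_from_conf(SAMPLE_CEPH_CONF))  # 414e9a96-5b3c-4f2a-9c51-0123456789ab
```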
2026-04-09 06:00:55.691365 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 06:00:55.691376 | orchestrator | Thursday 09 April 2026 06:00:54 +0000 (0:00:01.181) 0:49:56.307 ******** 2026-04-09 06:00:55.691387 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:00:55.691398 | orchestrator | 2026-04-09 06:00:55.691408 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 06:00:55.691426 | orchestrator | Thursday 09 April 2026 06:00:55 +0000 (0:00:01.243) 0:49:57.550 ******** 2026-04-09 06:01:05.200681 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:01:05.200851 | orchestrator | 2026-04-09 06:01:05.200872 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 06:01:05.200885 | orchestrator | Thursday 09 April 2026 06:00:56 +0000 (0:00:01.131) 0:49:58.682 ******** 2026-04-09 06:01:05.200895 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:01:05.200905 | orchestrator | 2026-04-09 06:01:05.200931 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 06:01:05.200951 | orchestrator | Thursday 09 April 2026 06:00:57 +0000 (0:00:01.116) 0:49:59.798 ******** 2026-04-09 06:01:05.200962 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:01:05.200972 | orchestrator | 2026-04-09 06:01:05.200982 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 06:01:05.200992 | orchestrator | Thursday 09 April 2026 06:00:59 +0000 (0:00:01.151) 0:50:00.950 ******** 2026-04-09 06:01:05.201002 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:01:05.201012 | orchestrator | 2026-04-09 06:01:05.201023 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 06:01:05.201033 | orchestrator | Thursday 09 April 2026 06:01:00 +0000 
(0:00:01.163) 0:50:02.113 ******** 2026-04-09 06:01:05.201042 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:01:05.201052 | orchestrator | 2026-04-09 06:01:05.201062 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 06:01:05.201072 | orchestrator | Thursday 09 April 2026 06:01:01 +0000 (0:00:01.138) 0:50:03.251 ******** 2026-04-09 06:01:05.201082 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:01:05.201092 | orchestrator | 2026-04-09 06:01:05.201102 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 06:01:05.201113 | orchestrator | Thursday 09 April 2026 06:01:02 +0000 (0:00:01.143) 0:50:04.395 ******** 2026-04-09 06:01:05.201123 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:01:05.201133 | orchestrator | 2026-04-09 06:01:05.201143 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 06:01:05.201153 | orchestrator | Thursday 09 April 2026 06:01:03 +0000 (0:00:01.287) 0:50:05.682 ******** 2026-04-09 06:01:05.201188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:01:05.201202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-04-09 06:01:05.201216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:01:05.201245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 06:01:05.201260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:01:05.201290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-04-09 06:01:05.201303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:01:05.201319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78f51fbd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 
'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 06:01:05.201343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:01:05.201362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:01:05.201374 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:01:05.201387 | orchestrator | 2026-04-09 06:01:05.201400 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 06:01:05.201418 | orchestrator | Thursday 09 April 2026 06:01:05 +0000 (0:00:01.314) 0:50:06.997 ******** 2026-04-09 06:01:05.201445 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:01:10.645841 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:01:10.645943 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:01:10.645983 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-17-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:01:10.645998 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:01:10.646066 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-04-09 06:01:10.646082 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:01:10.646118 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78f51fbd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 
'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f51fbd-2480-484a-bf4e-21c2c989255f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:01:10.646142 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:01:10.646160 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:01:10.646173 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:01:10.646186 | orchestrator | 2026-04-09 06:01:10.646200 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 06:01:10.646213 | orchestrator | Thursday 09 April 2026 06:01:06 +0000 (0:00:01.307) 0:50:08.305 ******** 2026-04-09 06:01:10.646225 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:01:10.646237 | orchestrator | 2026-04-09 06:01:10.646248 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 06:01:10.646259 | orchestrator | Thursday 09 April 2026 06:01:07 +0000 (0:00:01.550) 0:50:09.856 ******** 2026-04-09 06:01:10.646270 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:01:10.646281 | orchestrator | 2026-04-09 06:01:10.646292 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 06:01:10.646303 | orchestrator | Thursday 09 April 2026 06:01:09 +0000 (0:00:01.165) 0:50:11.021 ******** 2026-04-09 06:01:10.646314 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:01:10.646325 | orchestrator | 2026-04-09 06:01:10.646336 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 06:01:10.646354 | orchestrator | Thursday 09 April 2026 06:01:10 +0000 (0:00:01.489) 0:50:12.511 ******** 2026-04-09 06:02:04.139903 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:02:04.140050 | orchestrator | 2026-04-09 06:02:04.140069 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 06:02:04.140082 
| orchestrator | Thursday 09 April 2026 06:01:11 +0000 (0:00:01.144) 0:50:13.656 ********
2026-04-09 06:02:04.140093 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:02:04.140104 | orchestrator |
2026-04-09 06:02:04.140116 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 06:02:04.140127 | orchestrator | Thursday 09 April 2026 06:01:13 +0000 (0:00:01.304) 0:50:14.961 ********
2026-04-09 06:02:04.140138 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:02:04.140149 | orchestrator |
2026-04-09 06:02:04.140160 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 06:02:04.140171 | orchestrator | Thursday 09 April 2026 06:01:14 +0000 (0:00:01.177) 0:50:16.138 ********
2026-04-09 06:02:04.140182 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 06:02:04.140193 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 06:02:04.140204 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 06:02:04.140215 | orchestrator |
2026-04-09 06:02:04.140226 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 06:02:04.140237 | orchestrator | Thursday 09 April 2026 06:01:16 +0000 (0:00:02.070) 0:50:18.209 ********
2026-04-09 06:02:04.140248 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 06:02:04.140260 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 06:02:04.140270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 06:02:04.140282 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:02:04.140292 | orchestrator |
2026-04-09 06:02:04.140303 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 06:02:04.140314 | orchestrator | Thursday 09 April 2026 06:01:17 +0000 (0:00:01.235) 0:50:19.445 ********
2026-04-09 06:02:04.140325 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:02:04.140335 | orchestrator |
2026-04-09 06:02:04.140346 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 06:02:04.140357 | orchestrator | Thursday 09 April 2026 06:01:18 +0000 (0:00:01.127) 0:50:20.572 ********
2026-04-09 06:02:04.140367 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 06:02:04.140379 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:02:04.140393 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:02:04.140413 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 06:02:04.140433 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 06:02:04.140452 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 06:02:04.140473 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 06:02:04.140494 | orchestrator |
2026-04-09 06:02:04.140515 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 06:02:04.140536 | orchestrator | Thursday 09 April 2026 06:01:20 +0000 (0:00:02.177) 0:50:22.750 ********
2026-04-09 06:02:04.140550 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 06:02:04.140564 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:02:04.140577 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:02:04.140591 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 06:02:04.140604 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 06:02:04.140617 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 06:02:04.140644 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 06:02:04.140668 | orchestrator |
2026-04-09 06:02:04.140682 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************
2026-04-09 06:02:04.140696 | orchestrator | Thursday 09 April 2026 06:01:23 +0000 (0:00:03.053) 0:50:25.804 ********
2026-04-09 06:02:04.140710 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:02:04.140724 | orchestrator |
2026-04-09 06:02:04.140739 | orchestrator | TASK [Wait until only rank 0 is up] ********************************************
2026-04-09 06:02:04.140750 | orchestrator | Thursday 09 April 2026 06:01:27 +0000 (0:00:03.294) 0:50:29.098 ********
2026-04-09 06:02:04.140829 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:02:04.140842 | orchestrator |
2026-04-09 06:02:04.140853 | orchestrator | TASK [Get name of remaining active mds] ****************************************
2026-04-09 06:02:04.140864 | orchestrator | Thursday 09 April 2026 06:01:30 +0000 (0:00:03.077) 0:50:32.175 ********
2026-04-09 06:02:04.140875 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:02:04.140886 | orchestrator |
2026-04-09 06:02:04.140897 | orchestrator | TASK [Set_fact mds_active_name] ************************************************
2026-04-09 06:02:04.140908 | orchestrator | Thursday 09 April 2026 06:01:32 +0000 (0:00:02.166) 0:50:34.342 ********
2026-04-09 06:02:04.140942 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4697', 'value': {'gid': 4697, 'name': 'testbed-node-4', 'rank': 0, 'incarnation': 7, 'state': 'up:active', 'state_seq': 1266, 'addr': '192.168.16.14:6817/120999886', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.14:6816', 'nonce': 120999886}, {'type': 'v1', 'addr': '192.168.16.14:6817', 'nonce': 120999886}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}})
2026-04-09 06:02:04.140958 | orchestrator |
2026-04-09 06:02:04.140969 | orchestrator | TASK [Set_fact mds_active_host] ************************************************
2026-04-09 06:02:04.140981 | orchestrator | Thursday 09 April 2026 06:01:33 +0000 (0:00:01.352) 0:50:35.695 ********
2026-04-09 06:02:04.140992 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 06:02:04.141003 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 06:02:04.141014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 06:02:04.141025 | orchestrator |
2026-04-09 06:02:04.141036 | orchestrator | TASK [Create standby_mdss group] ***********************************************
2026-04-09 06:02:04.141048 | orchestrator | Thursday 09 April 2026 06:01:35 +0000 (0:00:01.626) 0:50:37.321 ********
2026-04-09 06:02:04.141059 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 06:02:04.141070 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 06:02:04.141082 | orchestrator |
2026-04-09 06:02:04.141093 | orchestrator | TASK [Stop standby ceph mds] ***************************************************
2026-04-09 06:02:04.141104 | orchestrator | Thursday 09 April 2026 06:01:36 +0000 (0:00:01.460) 0:50:38.782 ********
2026-04-09 06:02:04.141115 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 06:02:04.141126 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 06:02:04.141137 | orchestrator |
2026-04-09 06:02:04.141148 | orchestrator | TASK [Mask systemd units for standby ceph mds] *********************************
2026-04-09 06:02:04.141159 | orchestrator | Thursday 09 April 2026 06:01:46 +0000 (0:00:09.558) 0:50:48.341 ********
2026-04-09 06:02:04.141170 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 06:02:04.141181 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 06:02:04.141201 | orchestrator |
2026-04-09 06:02:04.141212 | orchestrator | TASK [Wait until all standbys mds are stopped] *********************************
2026-04-09 06:02:04.141223 | orchestrator | Thursday 09 April 2026 06:01:50 +0000 (0:00:03.864) 0:50:52.205 ********
2026-04-09 06:02:04.141234 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:02:04.141245 | orchestrator |
2026-04-09 06:02:04.141256 | orchestrator | TASK [Create active_mdss group] ************************************************
2026-04-09 06:02:04.141267 | orchestrator | Thursday 09 April 2026 06:01:52 +0000 (0:00:02.206) 0:50:54.412 ********
2026-04-09 06:02:04.141279 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:02:04.141290 | orchestrator |
2026-04-09 06:02:04.141301 | orchestrator | PLAY [Upgrade active mds] ******************************************************
2026-04-09 06:02:04.141312 | orchestrator |
2026-04-09 06:02:04.141323 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 06:02:04.141334 | orchestrator | Thursday 09 April 2026 06:01:54 +0000 (0:00:01.493) 0:50:55.905 ********
2026-04-09 06:02:04.141345 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-04-09 06:02:04.141356 | orchestrator |
2026-04-09 06:02:04.141367 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 06:02:04.141378 | orchestrator | Thursday 09 April 2026 06:01:55 +0000 (0:00:01.371) 0:50:57.277 ********
2026-04-09 06:02:04.141389 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:04.141400 | orchestrator |
2026-04-09 06:02:04.141411 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 06:02:04.141429 | orchestrator | Thursday 09 April 2026 06:01:56 +0000 (0:00:01.440) 0:50:58.717 ********
2026-04-09 06:02:04.141441 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:04.141458 | orchestrator |
2026-04-09 06:02:04.141478 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 06:02:04.141496 | orchestrator | Thursday 09 April 2026 06:01:57 +0000 (0:00:01.118) 0:50:59.836 ********
2026-04-09 06:02:04.141515 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:04.141536 | orchestrator |
2026-04-09 06:02:04.141556 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 06:02:04.141575 | orchestrator | Thursday 09 April 2026 06:01:59 +0000 (0:00:01.446) 0:51:01.283 ********
2026-04-09 06:02:04.141588 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:04.141599 | orchestrator |
2026-04-09 06:02:04.141610 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 06:02:04.141622 | orchestrator | Thursday 09 April 2026 06:02:00 +0000 (0:00:01.151) 0:51:02.434 ********
2026-04-09 06:02:04.141633 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:04.141644 | orchestrator |
2026-04-09 06:02:04.141655 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 06:02:04.141666 | orchestrator | Thursday 09 April 2026 06:02:01 +0000 (0:00:01.119) 0:51:03.553 ********
2026-04-09 06:02:04.141677 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:04.141687 | orchestrator |
2026-04-09 06:02:04.141698 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 06:02:04.141710 | orchestrator | Thursday 09 April 2026 06:02:02 +0000 (0:00:01.156) 0:51:04.710 ********
2026-04-09 06:02:04.141721 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:04.141732 | orchestrator |
2026-04-09 06:02:04.141742 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 06:02:04.141776 | orchestrator | Thursday 09 April 2026 06:02:03 +0000 (0:00:01.138) 0:51:05.849 ********
2026-04-09 06:02:04.141796 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:04.141808 | orchestrator |
2026-04-09 06:02:04.141828 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 06:02:29.885500 | orchestrator | Thursday 09 April 2026 06:02:05 +0000 (0:00:01.152) 0:51:07.001 ********
2026-04-09 06:02:29.885611 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:02:29.885627 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:02:29.885665 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:02:29.885677 | orchestrator |
2026-04-09 06:02:29.885689 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 06:02:29.885701 | orchestrator | Thursday 09 April 2026 06:02:07 +0000 (0:00:02.036) 0:51:09.038 ********
2026-04-09 06:02:29.885712 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:29.885724 | orchestrator |
2026-04-09 06:02:29.885735 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 06:02:29.885747 | orchestrator | Thursday 09 April 2026 06:02:08 +0000 (0:00:01.284) 0:51:10.323 ********
2026-04-09 06:02:29.885758 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:02:29.885838 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:02:29.885849 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:02:29.885860 | orchestrator |
2026-04-09 06:02:29.885872 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 06:02:29.885883 | orchestrator | Thursday 09 April 2026 06:02:11 +0000 (0:00:03.251) 0:51:13.575 ********
2026-04-09 06:02:29.885895 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 06:02:29.885907 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 06:02:29.885918 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 06:02:29.885929 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:29.885940 | orchestrator |
2026-04-09 06:02:29.885951 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 06:02:29.885963 | orchestrator | Thursday 09 April 2026 06:02:13 +0000 (0:00:01.825) 0:51:15.400 ********
2026-04-09 06:02:29.885976 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 06:02:29.885990 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 06:02:29.886002 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 06:02:29.886013 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:29.886087 | orchestrator |
2026-04-09 06:02:29.886102 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 06:02:29.886115 | orchestrator | Thursday 09 April 2026 06:02:15 +0000 (0:00:02.093) 0:51:17.493 ********
2026-04-09 06:02:29.886145 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 06:02:29.886163 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 06:02:29.886177 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 06:02:29.886200 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:29.886214 | orchestrator |
2026-04-09 06:02:29.886227 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 06:02:29.886241 | orchestrator | Thursday 09 April 2026 06:02:16 +0000 (0:00:01.230) 0:51:18.724 ********
2026-04-09 06:02:29.886277 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 06:02:09.363594', 'end': '2026-04-09 06:02:09.405243', 'delta': '0:00:00.041649', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 06:02:29.886293 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 06:02:09.925024', 'end': '2026-04-09 06:02:09.967596', 'delta': '0:00:00.042572', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 06:02:29.886307 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 06:02:10.515656', 'end': '2026-04-09 06:02:10.562243', 'delta': '0:00:00.046587', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 06:02:29.886320 | orchestrator |
2026-04-09 06:02:29.886334 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-09 06:02:29.886351 | orchestrator | Thursday 09 April 2026 06:02:18 +0000 (0:00:01.241) 0:51:19.965 ********
2026-04-09 06:02:29.886366 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:29.886379 | orchestrator |
2026-04-09 06:02:29.886393 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 06:02:29.886404 | orchestrator | Thursday 09 April 2026 06:02:19 +0000 (0:00:01.314) 0:51:21.279 ********
2026-04-09 06:02:29.886416 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:29.886427 | orchestrator |
2026-04-09 06:02:29.886438 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 06:02:29.886449 | orchestrator | Thursday 09 April 2026 06:02:20 +0000 (0:00:01.268) 0:51:22.548 ********
2026-04-09 06:02:29.886460 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:29.886471 | orchestrator |
2026-04-09 06:02:29.886482 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 06:02:29.886506 | orchestrator | Thursday 09 April 2026 06:02:21 +0000 (0:00:01.166) 0:51:23.715 ********
2026-04-09 06:02:29.886517 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:02:29.886529 | orchestrator |
2026-04-09 06:02:29.886540 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 06:02:29.886551 | orchestrator | Thursday 09 April 2026 06:02:23 +0000 (0:00:02.122) 0:51:25.838 ********
2026-04-09 06:02:29.886562 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:29.886573 | orchestrator |
2026-04-09 06:02:29.886584 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-09 06:02:29.886596 | orchestrator | Thursday 09 April 2026 06:02:25 +0000 (0:00:01.179) 0:51:27.017 ********
2026-04-09 06:02:29.886607 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:29.886618 | orchestrator |
2026-04-09 06:02:29.886629 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-09 06:02:29.886640 | orchestrator | Thursday 09 April 2026 06:02:26 +0000 (0:00:01.112) 0:51:28.130 ********
2026-04-09 06:02:29.886651 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:29.886662 | orchestrator |
2026-04-09 06:02:29.886673 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 06:02:29.886684 | orchestrator | Thursday 09 April 2026 06:02:27 +0000 (0:00:01.310) 0:51:29.440 ********
2026-04-09 06:02:29.886695 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:29.886706 | orchestrator |
2026-04-09 06:02:29.886717 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-09 06:02:29.886728 | orchestrator | Thursday 09 April 2026 06:02:28 +0000 (0:00:01.107) 0:51:30.547 ********
2026-04-09 06:02:29.886739 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:29.886750 | orchestrator |
2026-04-09 06:02:29.886783 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-09 06:02:29.886796 | orchestrator | Thursday 09 April 2026 06:02:29 +0000 (0:00:01.109) 0:51:31.657 ********
2026-04-09 06:02:29.886815 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:35.804336 | orchestrator |
2026-04-09 06:02:35.804451 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-09 06:02:35.804470 | orchestrator | Thursday 09 April 2026 06:02:30 +0000 (0:00:01.154) 0:51:32.811 ********
2026-04-09 06:02:35.804483 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:35.804496 | orchestrator |
2026-04-09 06:02:35.804508 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-09 06:02:35.804519 | orchestrator | Thursday 09 April 2026 06:02:32 +0000 (0:00:01.184) 0:51:33.996 ********
2026-04-09 06:02:35.804531 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:35.804543 | orchestrator |
2026-04-09 06:02:35.804554 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-09 06:02:35.804572 | orchestrator | Thursday 09 April 2026 06:02:33 +0000 (0:00:01.131) 0:51:35.167 ********
2026-04-09 06:02:35.804592 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:35.804609 | orchestrator |
2026-04-09 06:02:35.804628 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-09 06:02:35.804647 | orchestrator | Thursday 09 April 2026 06:02:34 +0000 (0:00:01.131) 0:51:36.299 ********
2026-04-09 06:02:35.804665 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:02:35.804682 | orchestrator |
2026-04-09 06:02:35.804702 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-09 06:02:35.804721 | orchestrator | Thursday 09 April 2026 06:02:35 +0000 (0:00:01.137) 0:51:37.436 ********
2026-04-09 06:02:35.804745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:02:35.804839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'uuids': ['c5a762f6-19fc-430f-b395-3c5066cc9fcd'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy']}})
2026-04-09 06:02:35.804886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60e6f74a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-09 06:02:35.804920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f']}})
2026-04-09 06:02:35.804935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:02:35.804969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:02:35.804984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-09 06:02:35.804999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:02:35.805022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV', 'dm-uuid-CRYPT-LUKS2-952a49d36c2646fe9329a26e5adefe63-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-09 06:02:35.805037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:02:35.805057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'uuids': ['952a49d3-6c26-46fe-9329-a26e5adefe63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV']}})
2026-04-09 06:02:35.805078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6']}})
2026-04-09 06:02:35.805110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:02:37.162456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9009f97f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-09 06:02:37.162590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:02:37.162624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:02:37.162638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy', 'dm-uuid-CRYPT-LUKS2-c5a762f619fc430fb3953c5066cc9fcd-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-09 06:02:37.162651 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:02:37.162664 | orchestrator |
2026-04-09 06:02:37.162676 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-09 06:02:37.162688 | orchestrator | Thursday 09 April 2026 06:02:37 +0000 (0:00:01.436) 0:51:38.874 ********
2026-04-09 06:02:37.162719 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 06:02:37.162732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'uuids': ['c5a762f6-19fc-430f-b395-3c5066cc9fcd'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard':
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.162753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60e6f74a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.162819 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.162834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.162853 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278125 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278277 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278303 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV', 'dm-uuid-CRYPT-LUKS2-952a49d36c2646fe9329a26e5adefe63-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278340 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278361 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'uuids': ['952a49d3-6c26-46fe-9329-a26e5adefe63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278404 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9009f97f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:02:37.278517 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:03:13.068424 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy', 'dm-uuid-CRYPT-LUKS2-c5a762f619fc430fb3953c5066cc9fcd-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 06:03:13.068541 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.068561 | orchestrator |
2026-04-09 06:03:13.068574 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-09 06:03:13.068588 | orchestrator | Thursday 09 April 2026 06:02:38 +0000 (0:00:01.421) 0:51:40.295 ********
2026-04-09 06:03:13.068599 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:03:13.068611 | orchestrator |
2026-04-09 06:03:13.068623 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 06:03:13.068634 | orchestrator | Thursday 09 April 2026 06:02:39 +0000 (0:00:01.529) 0:51:41.825 ********
2026-04-09 06:03:13.068645 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:03:13.068656 | orchestrator |
2026-04-09 06:03:13.068667 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 06:03:13.068678 | orchestrator | Thursday 09 April 2026 06:02:41 +0000 (0:00:01.145) 0:51:42.970 ********
2026-04-09 06:03:13.068689 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:03:13.068700 | orchestrator |
2026-04-09 06:03:13.068712 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 06:03:13.068723 | orchestrator | Thursday 09 April 2026 06:02:42 +0000 (0:00:01.475) 0:51:44.446 ********
2026-04-09 06:03:13.068734 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.068745 | orchestrator |
2026-04-09 06:03:13.068756 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 06:03:13.068767 | orchestrator | Thursday 09 April 2026 06:02:43 +0000 (0:00:01.113) 0:51:45.559 ********
2026-04-09 06:03:13.068819 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.068830 | orchestrator |
2026-04-09 06:03:13.068841 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 06:03:13.068870 | orchestrator | Thursday 09 April 2026 06:02:44 +0000 (0:00:01.274) 0:51:46.834 ********
2026-04-09 06:03:13.068881 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.068892 | orchestrator |
2026-04-09 06:03:13.068903 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 06:03:13.068915 | orchestrator | Thursday 09 April 2026 06:02:46 +0000 (0:00:01.213) 0:51:48.048 ********
2026-04-09 06:03:13.068926 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 06:03:13.068938 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 06:03:13.068952 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 06:03:13.068966 | orchestrator |
2026-04-09 06:03:13.068980 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 06:03:13.068994 | orchestrator | Thursday 09 April 2026 06:02:48 +0000 (0:00:02.067) 0:51:50.116 ********
2026-04-09 06:03:13.069007 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 06:03:13.069042 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 06:03:13.069056 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 06:03:13.069070 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.069083 | orchestrator |
2026-04-09 06:03:13.069097 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 06:03:13.069111 | orchestrator | Thursday 09 April 2026 06:02:49 +0000 (0:00:01.223) 0:51:51.340 ********
2026-04-09 06:03:13.069124 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-04-09 06:03:13.069138 | orchestrator |
2026-04-09 06:03:13.069153 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 06:03:13.069167 | orchestrator | Thursday 09 April 2026 06:02:50 +0000 (0:00:01.123) 0:51:52.463 ********
2026-04-09 06:03:13.069180 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.069194 | orchestrator |
2026-04-09 06:03:13.069207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 06:03:13.069220 | orchestrator | Thursday 09 April 2026 06:02:51 +0000 (0:00:01.119) 0:51:53.583 ********
2026-04-09 06:03:13.069233 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.069247 | orchestrator |
2026-04-09 06:03:13.069260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 06:03:13.069273 | orchestrator | Thursday 09 April 2026 06:02:52 +0000 (0:00:01.127) 0:51:54.711 ********
2026-04-09 06:03:13.069287 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.069300 | orchestrator |
2026-04-09 06:03:13.069311 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 06:03:13.069322 | orchestrator | Thursday 09 April 2026 06:02:53 +0000 (0:00:01.152) 0:51:55.863 ********
2026-04-09 06:03:13.069333 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:03:13.069344 | orchestrator |
2026-04-09 06:03:13.069355 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 06:03:13.069366 | orchestrator | Thursday 09 April 2026 06:02:55 +0000 (0:00:01.231) 0:51:57.094 ********
2026-04-09 06:03:13.069377 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 06:03:13.069408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 06:03:13.069420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 06:03:13.069431 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.069442 | orchestrator |
2026-04-09 06:03:13.069453 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 06:03:13.069465 | orchestrator | Thursday 09 April 2026 06:02:56 +0000 (0:00:01.363) 0:51:58.458 ********
2026-04-09 06:03:13.069475 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 06:03:13.069487 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 06:03:13.069497 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 06:03:13.069512 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.069531 | orchestrator |
2026-04-09 06:03:13.069549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 06:03:13.069567 | orchestrator | Thursday 09 April 2026 06:02:58 +0000 (0:00:01.429) 0:51:59.887 ********
2026-04-09 06:03:13.069584 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 06:03:13.069603 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 06:03:13.069619 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 06:03:13.069636 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.069653 | orchestrator |
2026-04-09 06:03:13.069669 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 06:03:13.069686 | orchestrator | Thursday 09 April 2026 06:02:59 +0000 (0:00:01.412) 0:52:01.300 ********
2026-04-09 06:03:13.069705 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:03:13.069723 | orchestrator |
2026-04-09 06:03:13.069742 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 06:03:13.069832 | orchestrator | Thursday 09 April 2026 06:03:00 +0000 (0:00:01.141) 0:52:02.442 ********
2026-04-09 06:03:13.069855 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 06:03:13.069873 | orchestrator |
2026-04-09 06:03:13.069885 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 06:03:13.069896 | orchestrator | Thursday 09 April 2026 06:03:01 +0000 (0:00:01.328) 0:52:03.770 ********
2026-04-09 06:03:13.069907 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:03:13.069919 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:03:13.069930 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:03:13.069941 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 06:03:13.069952 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 06:03:13.069971 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 06:03:13.069983 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 06:03:13.069994 | orchestrator |
2026-04-09 06:03:13.070005 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 06:03:13.070075 | orchestrator | Thursday 09 April 2026 06:03:04 +0000 (0:00:02.127) 0:52:05.897 ********
2026-04-09 06:03:13.070088 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:03:13.070099 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:03:13.070110 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:03:13.070121 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 06:03:13.070132 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 06:03:13.070144 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 06:03:13.070164 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 06:03:13.070176 | orchestrator |
2026-04-09 06:03:13.070187 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-04-09 06:03:13.070198 | orchestrator | Thursday 09 April 2026 06:03:07 +0000 (0:00:02.978) 0:52:08.876 ********
2026-04-09 06:03:13.070209 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.070220 | orchestrator |
2026-04-09 06:03:13.070231 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 06:03:13.070242 | orchestrator | Thursday 09 April 2026 06:03:08 +0000 (0:00:01.115) 0:52:09.991 ********
2026-04-09 06:03:13.070253 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-04-09 06:03:13.070265 | orchestrator |
2026-04-09 06:03:13.070276 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 06:03:13.070287 | orchestrator | Thursday 09 April 2026 06:03:09 +0000 (0:00:01.138) 0:52:11.129 ********
2026-04-09 06:03:13.070298 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-04-09 06:03:13.070309 | orchestrator |
2026-04-09 06:03:13.070320 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 06:03:13.070331 | orchestrator | Thursday 09 April 2026 06:03:10 +0000 (0:00:01.121) 0:52:12.251 ********
2026-04-09 06:03:13.070342 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:03:13.070353 | orchestrator |
2026-04-09 06:03:13.070364 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 06:03:13.070375 | orchestrator | Thursday 09 April 2026 06:03:11 +0000 (0:00:01.138) 0:52:13.389 ********
2026-04-09 06:03:13.070386 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:03:13.070407 | orchestrator |
2026-04-09 06:03:13.070418 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 06:03:13.070442 | orchestrator | Thursday 09 April 2026 06:03:13 +0000 (0:00:01.537) 0:52:14.926 ********
2026-04-09 06:04:03.685259 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:04:03.685377 | orchestrator |
2026-04-09 06:04:03.685395 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 06:04:03.685409 | orchestrator | Thursday 09 April 2026 06:03:14 +0000 (0:00:01.540) 0:52:16.467 ********
2026-04-09 06:04:03.685421 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:04:03.685432 | orchestrator |
2026-04-09 06:04:03.685443 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 06:04:03.685455 | orchestrator | Thursday 09 April 2026 06:03:16 +0000 (0:00:01.545) 0:52:18.013 ********
2026-04-09 06:04:03.685467 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:04:03.685479 | orchestrator |
2026-04-09 06:04:03.685490 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 06:04:03.685502 | orchestrator | Thursday 09 April 2026 06:03:17 +0000 (0:00:01.152) 0:52:19.165 ********
2026-04-09 06:04:03.685518 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:04:03.685538 | orchestrator |
2026-04-09 06:04:03.685551 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 06:04:03.685562 | orchestrator | Thursday 09 April 2026 06:03:18 +0000 (0:00:01.129) 0:52:20.295 ********
2026-04-09 06:04:03.685573 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:04:03.685584 | orchestrator |
2026-04-09 06:04:03.685595 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 06:04:03.685606 | orchestrator | Thursday 09 April 2026 06:03:19 +0000 (0:00:01.155) 0:52:21.450 ********
2026-04-09 06:04:03.685617 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:04:03.685629 | orchestrator |
2026-04-09 06:04:03.685640 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 06:04:03.685651 | orchestrator | Thursday 09 April 2026 06:03:21 +0000 (0:00:01.522) 0:52:22.973 ********
2026-04-09 06:04:03.685662 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:04:03.685673 | orchestrator |
2026-04-09 06:04:03.685685 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 06:04:03.685695 | orchestrator | Thursday 09 April 2026 06:03:22 +0000 (0:00:01.505) 0:52:24.478 ********
2026-04-09 06:04:03.685706 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:04:03.685717 | orchestrator |
2026-04-09 06:04:03.685728 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 06:04:03.685739 | orchestrator | Thursday 09 April 2026 06:03:23 +0000 (0:00:01.136) 0:52:25.615 ********
2026-04-09 06:04:03.685750 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:04:03.685762 | orchestrator |
2026-04-09 06:04:03.685773 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 06:04:03.685814 | orchestrator | Thursday 09 April 2026 06:03:24 +0000 (0:00:01.150) 0:52:26.765 ********
2026-04-09 06:04:03.685829 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:04:03.685843 | orchestrator |
2026-04-09 06:04:03.685872 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 06:04:03.685888 | orchestrator | Thursday 09 April 2026 06:03:26 +0000 (0:00:01.120) 0:52:27.886 ********
2026-04-09 06:04:03.685902 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:04:03.685916 | orchestrator |
2026-04-09 06:04:03.685929 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 06:04:03.685943 | orchestrator | Thursday 09 April 2026 06:03:27 +0000 (0:00:01.160) 0:52:29.047 ********
2026-04-09 06:04:03.685957 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:04:03.685970 | orchestrator |
2026-04-09 06:04:03.685985 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 06:04:03.685999 | orchestrator | Thursday 09 April 2026 06:03:28 +0000 (0:00:01.171) 0:52:30.219 ********
2026-04-09 06:04:03.686010 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:04:03.686087 | orchestrator |
2026-04-09 06:04:03.686122 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 06:04:03.686134 | orchestrator | Thursday 09 April 2026 06:03:29 +0000 (0:00:01.114) 0:52:31.333 ********
2026-04-09 06:04:03.686146 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:04:03.686157 | orchestrator |
2026-04-09 06:04:03.686168 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 06:04:03.686179 | orchestrator | Thursday 09 April 2026 06:03:30 +0000 (0:00:01.098) 0:52:32.432 ********
2026-04-09 06:04:03.686190 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:04:03.686202 | orchestrator |
2026-04-09 06:04:03.686213 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 06:04:03.686225 | orchestrator | Thursday 09 April 2026 06:03:31 +0000 (0:00:01.156) 0:52:33.588 ********
2026-04-09 06:04:03.686236 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:04:03.686247 | orchestrator |
2026-04-09
06:04:03.686258 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 06:04:03.686270 | orchestrator | Thursday 09 April 2026 06:03:32 +0000 (0:00:01.238) 0:52:34.827 ******** 2026-04-09 06:04:03.686281 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:04:03.686292 | orchestrator | 2026-04-09 06:04:03.686304 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 06:04:03.686315 | orchestrator | Thursday 09 April 2026 06:03:34 +0000 (0:00:01.161) 0:52:35.989 ******** 2026-04-09 06:04:03.686326 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686337 | orchestrator | 2026-04-09 06:04:03.686348 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 06:04:03.686359 | orchestrator | Thursday 09 April 2026 06:03:35 +0000 (0:00:01.221) 0:52:37.210 ******** 2026-04-09 06:04:03.686370 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686381 | orchestrator | 2026-04-09 06:04:03.686392 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 06:04:03.686404 | orchestrator | Thursday 09 April 2026 06:03:36 +0000 (0:00:01.237) 0:52:38.448 ******** 2026-04-09 06:04:03.686415 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686426 | orchestrator | 2026-04-09 06:04:03.686437 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 06:04:03.686448 | orchestrator | Thursday 09 April 2026 06:03:37 +0000 (0:00:01.109) 0:52:39.558 ******** 2026-04-09 06:04:03.686459 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686471 | orchestrator | 2026-04-09 06:04:03.686482 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 06:04:03.686513 | orchestrator | Thursday 09 April 2026 06:03:38 +0000 (0:00:01.161) 
0:52:40.720 ******** 2026-04-09 06:04:03.686525 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686536 | orchestrator | 2026-04-09 06:04:03.686547 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 06:04:03.686558 | orchestrator | Thursday 09 April 2026 06:03:39 +0000 (0:00:01.138) 0:52:41.858 ******** 2026-04-09 06:04:03.686569 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686580 | orchestrator | 2026-04-09 06:04:03.686591 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 06:04:03.686602 | orchestrator | Thursday 09 April 2026 06:03:41 +0000 (0:00:01.149) 0:52:43.008 ******** 2026-04-09 06:04:03.686613 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686625 | orchestrator | 2026-04-09 06:04:03.686638 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-09 06:04:03.686658 | orchestrator | Thursday 09 April 2026 06:03:42 +0000 (0:00:01.133) 0:52:44.142 ******** 2026-04-09 06:04:03.686670 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686681 | orchestrator | 2026-04-09 06:04:03.686691 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 06:04:03.686702 | orchestrator | Thursday 09 April 2026 06:03:43 +0000 (0:00:01.093) 0:52:45.236 ******** 2026-04-09 06:04:03.686713 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686733 | orchestrator | 2026-04-09 06:04:03.686744 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 06:04:03.686755 | orchestrator | Thursday 09 April 2026 06:03:44 +0000 (0:00:01.182) 0:52:46.418 ******** 2026-04-09 06:04:03.686765 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686777 | orchestrator | 2026-04-09 06:04:03.686814 | orchestrator | TASK [ceph-common : 
Include configure_memory_allocator.yml] ******************** 2026-04-09 06:04:03.686826 | orchestrator | Thursday 09 April 2026 06:03:45 +0000 (0:00:01.122) 0:52:47.541 ******** 2026-04-09 06:04:03.686837 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686848 | orchestrator | 2026-04-09 06:04:03.686859 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-09 06:04:03.686870 | orchestrator | Thursday 09 April 2026 06:03:46 +0000 (0:00:01.171) 0:52:48.713 ******** 2026-04-09 06:04:03.686881 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.686891 | orchestrator | 2026-04-09 06:04:03.686902 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 06:04:03.686914 | orchestrator | Thursday 09 April 2026 06:03:47 +0000 (0:00:01.152) 0:52:49.866 ******** 2026-04-09 06:04:03.686924 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:04:03.686935 | orchestrator | 2026-04-09 06:04:03.686946 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 06:04:03.686964 | orchestrator | Thursday 09 April 2026 06:03:50 +0000 (0:00:02.018) 0:52:51.884 ******** 2026-04-09 06:04:03.686975 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:04:03.686986 | orchestrator | 2026-04-09 06:04:03.686997 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 06:04:03.687008 | orchestrator | Thursday 09 April 2026 06:03:52 +0000 (0:00:02.249) 0:52:54.133 ******** 2026-04-09 06:04:03.687019 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-09 06:04:03.687032 | orchestrator | 2026-04-09 06:04:03.687043 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 06:04:03.687054 | orchestrator | Thursday 09 April 2026 06:03:53 +0000 (0:00:01.141) 
0:52:55.275 ******** 2026-04-09 06:04:03.687065 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.687076 | orchestrator | 2026-04-09 06:04:03.687087 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 06:04:03.687097 | orchestrator | Thursday 09 April 2026 06:03:54 +0000 (0:00:01.130) 0:52:56.406 ******** 2026-04-09 06:04:03.687108 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.687119 | orchestrator | 2026-04-09 06:04:03.687130 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 06:04:03.687141 | orchestrator | Thursday 09 April 2026 06:03:55 +0000 (0:00:01.244) 0:52:57.650 ******** 2026-04-09 06:04:03.687152 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 06:04:03.687163 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 06:04:03.687174 | orchestrator | 2026-04-09 06:04:03.687185 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 06:04:03.687196 | orchestrator | Thursday 09 April 2026 06:03:57 +0000 (0:00:01.848) 0:52:59.499 ******** 2026-04-09 06:04:03.687207 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:04:03.687218 | orchestrator | 2026-04-09 06:04:03.687229 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 06:04:03.687240 | orchestrator | Thursday 09 April 2026 06:03:59 +0000 (0:00:01.447) 0:53:00.946 ******** 2026-04-09 06:04:03.687251 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.687262 | orchestrator | 2026-04-09 06:04:03.687273 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 06:04:03.687284 | orchestrator | Thursday 09 April 2026 06:04:00 +0000 (0:00:01.160) 0:53:02.107 ******** 2026-04-09 06:04:03.687295 | 
orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.687306 | orchestrator | 2026-04-09 06:04:03.687317 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 06:04:03.687335 | orchestrator | Thursday 09 April 2026 06:04:01 +0000 (0:00:01.153) 0:53:03.261 ******** 2026-04-09 06:04:03.687346 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:03.687357 | orchestrator | 2026-04-09 06:04:03.687368 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 06:04:03.687379 | orchestrator | Thursday 09 April 2026 06:04:02 +0000 (0:00:01.152) 0:53:04.413 ******** 2026-04-09 06:04:03.687390 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-04-09 06:04:03.687401 | orchestrator | 2026-04-09 06:04:03.687412 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 06:04:03.687430 | orchestrator | Thursday 09 April 2026 06:04:03 +0000 (0:00:01.132) 0:53:05.545 ******** 2026-04-09 06:04:49.638012 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:04:49.638174 | orchestrator | 2026-04-09 06:04:49.638191 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-09 06:04:49.638204 | orchestrator | Thursday 09 April 2026 06:04:05 +0000 (0:00:01.915) 0:53:07.460 ******** 2026-04-09 06:04:49.638216 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 06:04:49.638228 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 06:04:49.638239 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 06:04:49.638250 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638262 | orchestrator | 2026-04-09 06:04:49.638273 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-04-09 06:04:49.638285 | orchestrator | Thursday 09 April 2026 06:04:06 +0000 (0:00:01.165) 0:53:08.626 ******** 2026-04-09 06:04:49.638296 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638307 | orchestrator | 2026-04-09 06:04:49.638318 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-09 06:04:49.638330 | orchestrator | Thursday 09 April 2026 06:04:07 +0000 (0:00:01.154) 0:53:09.781 ******** 2026-04-09 06:04:49.638341 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638352 | orchestrator | 2026-04-09 06:04:49.638363 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-09 06:04:49.638374 | orchestrator | Thursday 09 April 2026 06:04:09 +0000 (0:00:01.158) 0:53:10.940 ******** 2026-04-09 06:04:49.638385 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638396 | orchestrator | 2026-04-09 06:04:49.638407 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-09 06:04:49.638418 | orchestrator | Thursday 09 April 2026 06:04:10 +0000 (0:00:01.172) 0:53:12.113 ******** 2026-04-09 06:04:49.638429 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638441 | orchestrator | 2026-04-09 06:04:49.638451 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-09 06:04:49.638463 | orchestrator | Thursday 09 April 2026 06:04:11 +0000 (0:00:01.163) 0:53:13.276 ******** 2026-04-09 06:04:49.638474 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638485 | orchestrator | 2026-04-09 06:04:49.638496 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 06:04:49.638507 | orchestrator | Thursday 09 April 2026 06:04:12 +0000 (0:00:01.147) 0:53:14.424 ******** 2026-04-09 06:04:49.638518 | orchestrator | 
ok: [testbed-node-4] 2026-04-09 06:04:49.638529 | orchestrator | 2026-04-09 06:04:49.638543 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 06:04:49.638573 | orchestrator | Thursday 09 April 2026 06:04:15 +0000 (0:00:02.509) 0:53:16.934 ******** 2026-04-09 06:04:49.638586 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:04:49.638599 | orchestrator | 2026-04-09 06:04:49.638614 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 06:04:49.638628 | orchestrator | Thursday 09 April 2026 06:04:16 +0000 (0:00:01.158) 0:53:18.093 ******** 2026-04-09 06:04:49.638641 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-04-09 06:04:49.638678 | orchestrator | 2026-04-09 06:04:49.638692 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-09 06:04:49.638707 | orchestrator | Thursday 09 April 2026 06:04:17 +0000 (0:00:01.126) 0:53:19.219 ******** 2026-04-09 06:04:49.638720 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638733 | orchestrator | 2026-04-09 06:04:49.638746 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-09 06:04:49.638759 | orchestrator | Thursday 09 April 2026 06:04:18 +0000 (0:00:01.186) 0:53:20.406 ******** 2026-04-09 06:04:49.638773 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638787 | orchestrator | 2026-04-09 06:04:49.638868 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-09 06:04:49.638882 | orchestrator | Thursday 09 April 2026 06:04:19 +0000 (0:00:01.123) 0:53:21.529 ******** 2026-04-09 06:04:49.638896 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638909 | orchestrator | 2026-04-09 06:04:49.638920 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-04-09 06:04:49.638931 | orchestrator | Thursday 09 April 2026 06:04:20 +0000 (0:00:01.235) 0:53:22.765 ******** 2026-04-09 06:04:49.638942 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638953 | orchestrator | 2026-04-09 06:04:49.638964 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-09 06:04:49.638975 | orchestrator | Thursday 09 April 2026 06:04:22 +0000 (0:00:01.227) 0:53:23.992 ******** 2026-04-09 06:04:49.638986 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.638997 | orchestrator | 2026-04-09 06:04:49.639008 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-09 06:04:49.639019 | orchestrator | Thursday 09 April 2026 06:04:23 +0000 (0:00:01.147) 0:53:25.140 ******** 2026-04-09 06:04:49.639030 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639041 | orchestrator | 2026-04-09 06:04:49.639052 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-09 06:04:49.639063 | orchestrator | Thursday 09 April 2026 06:04:24 +0000 (0:00:01.146) 0:53:26.287 ******** 2026-04-09 06:04:49.639074 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639085 | orchestrator | 2026-04-09 06:04:49.639096 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-09 06:04:49.639107 | orchestrator | Thursday 09 April 2026 06:04:25 +0000 (0:00:01.162) 0:53:27.450 ******** 2026-04-09 06:04:49.639118 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639129 | orchestrator | 2026-04-09 06:04:49.639140 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-09 06:04:49.639151 | orchestrator | Thursday 09 April 2026 06:04:26 +0000 (0:00:01.166) 0:53:28.617 ******** 2026-04-09 06:04:49.639162 | orchestrator | ok: [testbed-node-4] 
2026-04-09 06:04:49.639173 | orchestrator | 2026-04-09 06:04:49.639184 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-09 06:04:49.639213 | orchestrator | Thursday 09 April 2026 06:04:27 +0000 (0:00:01.159) 0:53:29.776 ******** 2026-04-09 06:04:49.639225 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-09 06:04:49.639237 | orchestrator | 2026-04-09 06:04:49.639248 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-09 06:04:49.639259 | orchestrator | Thursday 09 April 2026 06:04:29 +0000 (0:00:01.108) 0:53:30.885 ******** 2026-04-09 06:04:49.639270 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-04-09 06:04:49.639282 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-09 06:04:49.639293 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-09 06:04:49.639304 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-09 06:04:49.639315 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-09 06:04:49.639326 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-09 06:04:49.639336 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-09 06:04:49.639357 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-09 06:04:49.639368 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-09 06:04:49.639379 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-09 06:04:49.639388 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-09 06:04:49.639398 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-09 06:04:49.639408 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-09 06:04:49.639418 | 
orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-09 06:04:49.639427 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-09 06:04:49.639437 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-09 06:04:49.639447 | orchestrator | 2026-04-09 06:04:49.639457 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-09 06:04:49.639467 | orchestrator | Thursday 09 April 2026 06:04:35 +0000 (0:00:06.541) 0:53:37.427 ******** 2026-04-09 06:04:49.639477 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-09 06:04:49.639487 | orchestrator | 2026-04-09 06:04:49.639497 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-09 06:04:49.639507 | orchestrator | Thursday 09 April 2026 06:04:36 +0000 (0:00:01.096) 0:53:38.523 ******** 2026-04-09 06:04:49.639522 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 06:04:49.639534 | orchestrator | 2026-04-09 06:04:49.639544 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-09 06:04:49.639554 | orchestrator | Thursday 09 April 2026 06:04:38 +0000 (0:00:01.618) 0:53:40.141 ******** 2026-04-09 06:04:49.639564 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 06:04:49.639574 | orchestrator | 2026-04-09 06:04:49.639583 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 06:04:49.639593 | orchestrator | Thursday 09 April 2026 06:04:40 +0000 (0:00:02.043) 0:53:42.185 ******** 2026-04-09 06:04:49.639603 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639613 | orchestrator | 
2026-04-09 06:04:49.639623 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 06:04:49.639633 | orchestrator | Thursday 09 April 2026 06:04:41 +0000 (0:00:01.176) 0:53:43.362 ******** 2026-04-09 06:04:49.639643 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639652 | orchestrator | 2026-04-09 06:04:49.639662 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 06:04:49.639672 | orchestrator | Thursday 09 April 2026 06:04:42 +0000 (0:00:01.136) 0:53:44.498 ******** 2026-04-09 06:04:49.639682 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639692 | orchestrator | 2026-04-09 06:04:49.639701 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-09 06:04:49.639711 | orchestrator | Thursday 09 April 2026 06:04:43 +0000 (0:00:01.120) 0:53:45.619 ******** 2026-04-09 06:04:49.639721 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639731 | orchestrator | 2026-04-09 06:04:49.639740 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 06:04:49.639750 | orchestrator | Thursday 09 April 2026 06:04:44 +0000 (0:00:01.120) 0:53:46.740 ******** 2026-04-09 06:04:49.639760 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639770 | orchestrator | 2026-04-09 06:04:49.639780 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-09 06:04:49.639804 | orchestrator | Thursday 09 April 2026 06:04:46 +0000 (0:00:01.144) 0:53:47.884 ******** 2026-04-09 06:04:49.639815 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639824 | orchestrator | 2026-04-09 06:04:49.639842 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-09 06:04:49.639852 | 
orchestrator | Thursday 09 April 2026 06:04:47 +0000 (0:00:01.178) 0:53:49.063 ******** 2026-04-09 06:04:49.639862 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639871 | orchestrator | 2026-04-09 06:04:49.639881 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-09 06:04:49.639891 | orchestrator | Thursday 09 April 2026 06:04:48 +0000 (0:00:01.150) 0:53:50.214 ******** 2026-04-09 06:04:49.639901 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639911 | orchestrator | 2026-04-09 06:04:49.639920 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-09 06:04:49.639930 | orchestrator | Thursday 09 April 2026 06:04:49 +0000 (0:00:01.134) 0:53:51.349 ******** 2026-04-09 06:04:49.639940 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:04:49.639950 | orchestrator | 2026-04-09 06:04:49.639965 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-09 06:05:45.437758 | orchestrator | Thursday 09 April 2026 06:04:50 +0000 (0:00:01.163) 0:53:52.512 ******** 2026-04-09 06:05:45.437911 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:05:45.437926 | orchestrator | 2026-04-09 06:05:45.437937 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-09 06:05:45.437946 | orchestrator | Thursday 09 April 2026 06:04:51 +0000 (0:00:01.146) 0:53:53.659 ******** 2026-04-09 06:05:45.437956 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:05:45.437965 | orchestrator | 2026-04-09 06:05:45.437974 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-09 06:05:45.437984 | orchestrator | Thursday 09 April 2026 06:04:52 +0000 (0:00:01.152) 0:53:54.812 ******** 2026-04-09 06:05:45.437993 | orchestrator | changed: [testbed-node-4 -> 
testbed-node-2(192.168.16.12)] 2026-04-09 06:05:45.438002 | orchestrator | 2026-04-09 06:05:45.438011 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-09 06:05:45.438075 | orchestrator | Thursday 09 April 2026 06:04:57 +0000 (0:00:04.480) 0:53:59.292 ******** 2026-04-09 06:05:45.438086 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 06:05:45.438096 | orchestrator | 2026-04-09 06:05:45.438105 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 06:05:45.438115 | orchestrator | Thursday 09 April 2026 06:04:58 +0000 (0:00:01.246) 0:54:00.539 ******** 2026-04-09 06:05:45.438126 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-09 06:05:45.438138 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-09 06:05:45.438148 | orchestrator | 2026-04-09 06:05:45.438171 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 06:05:45.438180 | orchestrator | Thursday 09 April 2026 06:05:03 +0000 (0:00:04.926) 0:54:05.465 ******** 2026-04-09 06:05:45.438189 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:05:45.438199 | orchestrator | 2026-04-09 06:05:45.438207 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-04-09 06:05:45.438216 | orchestrator | Thursday 09 April 2026 06:05:04 +0000 (0:00:01.155) 0:54:06.621 ******** 2026-04-09 06:05:45.438225 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:05:45.438234 | orchestrator | 2026-04-09 06:05:45.438243 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 06:05:45.438270 | orchestrator | Thursday 09 April 2026 06:05:05 +0000 (0:00:01.115) 0:54:07.737 ******** 2026-04-09 06:05:45.438279 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:05:45.438288 | orchestrator | 2026-04-09 06:05:45.438297 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 06:05:45.438309 | orchestrator | Thursday 09 April 2026 06:05:07 +0000 (0:00:01.147) 0:54:08.884 ******** 2026-04-09 06:05:45.438319 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:05:45.438330 | orchestrator | 2026-04-09 06:05:45.438341 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 06:05:45.438351 | orchestrator | Thursday 09 April 2026 06:05:08 +0000 (0:00:01.131) 0:54:10.016 ******** 2026-04-09 06:05:45.438361 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:05:45.438371 | orchestrator | 2026-04-09 06:05:45.438382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 06:05:45.438393 | orchestrator | Thursday 09 April 2026 06:05:09 +0000 (0:00:01.120) 0:54:11.137 ******** 2026-04-09 06:05:45.438404 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:05:45.438415 | orchestrator | 2026-04-09 06:05:45.438426 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 06:05:45.438436 | orchestrator | Thursday 09 April 2026 06:05:10 +0000 (0:00:01.280) 0:54:12.418 
********
2026-04-09 06:05:45.438447 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3) 
2026-04-09 06:05:45.438458 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4) 
2026-04-09 06:05:45.438469 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5) 
2026-04-09 06:05:45.438479 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:05:45.438489 | orchestrator |
2026-04-09 06:05:45.438500 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 06:05:45.438511 | orchestrator | Thursday 09 April 2026 06:05:11 +0000 (0:00:01.386) 0:54:13.805 ********
2026-04-09 06:05:45.438522 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3) 
2026-04-09 06:05:45.438533 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4) 
2026-04-09 06:05:45.438543 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5) 
2026-04-09 06:05:45.438553 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:05:45.438564 | orchestrator |
2026-04-09 06:05:45.438575 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 06:05:45.438585 | orchestrator | Thursday 09 April 2026 06:05:13 +0000 (0:00:01.356) 0:54:15.161 ********
2026-04-09 06:05:45.438595 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3) 
2026-04-09 06:05:45.438606 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4) 
2026-04-09 06:05:45.438617 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5) 
2026-04-09 06:05:45.438642 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:05:45.438653 | orchestrator |
2026-04-09 06:05:45.438663 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 06:05:45.438672 | orchestrator | Thursday 09 April 2026 06:05:14 +0000 (0:00:01.429) 0:54:16.591 ********
2026-04-09 06:05:45.438681 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:05:45.438690 | orchestrator |
2026-04-09 06:05:45.438698 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 06:05:45.438707 | orchestrator | Thursday 09 April 2026 06:05:15 +0000 (0:00:01.216) 0:54:17.807 ********
2026-04-09 06:05:45.438716 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 06:05:45.438725 | orchestrator |
2026-04-09 06:05:45.438734 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 06:05:45.438743 | orchestrator | Thursday 09 April 2026 06:05:17 +0000 (0:00:01.911) 0:54:19.719 ********
2026-04-09 06:05:45.438752 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:05:45.438760 | orchestrator |
2026-04-09 06:05:45.438769 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-09 06:05:45.438784 | orchestrator | Thursday 09 April 2026 06:05:19 +0000 (0:00:01.773) 0:54:21.493 ********
2026-04-09 06:05:45.438793 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:05:45.438821 | orchestrator |
2026-04-09 06:05:45.438830 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-09 06:05:45.438839 | orchestrator | Thursday 09 April 2026 06:05:20 +0000 (0:00:01.144) 0:54:22.637 ********
2026-04-09 06:05:45.438848 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4
2026-04-09 06:05:45.438857 | orchestrator |
2026-04-09 06:05:45.438865 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-09 06:05:45.438874 | orchestrator | Thursday 09 April 2026 06:05:22 +0000 (0:00:01.443) 0:54:24.081 ********
2026-04-09 06:05:45.438883 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-09 06:05:45.438892 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-09 06:05:45.438901 | orchestrator |
2026-04-09 06:05:45.438910 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-09 06:05:45.438918 | orchestrator | Thursday 09 April 2026 06:05:24 +0000 (0:00:01.814) 0:54:25.896 ********
2026-04-09 06:05:45.438927 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 06:05:45.438936 | orchestrator | skipping: [testbed-node-4] => (item=None) 
2026-04-09 06:05:45.438945 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 06:05:45.438954 | orchestrator |
2026-04-09 06:05:45.438967 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-09 06:05:45.438976 | orchestrator | Thursday 09 April 2026 06:05:27 +0000 (0:00:03.544) 0:54:29.440 ********
2026-04-09 06:05:45.438985 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-09 06:05:45.438994 | orchestrator | skipping: [testbed-node-4] => (item=None) 
2026-04-09 06:05:45.439003 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:05:45.439012 | orchestrator |
2026-04-09 06:05:45.439020 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-09 06:05:45.439029 | orchestrator | Thursday 09 April 2026 06:05:29 +0000 (0:00:01.950) 0:54:31.391 ********
2026-04-09 06:05:45.439038 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:05:45.439047 | orchestrator |
2026-04-09 06:05:45.439056 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-09 06:05:45.439064 | orchestrator | Thursday 09 April 2026 06:05:31 +0000 (0:00:01.561) 0:54:32.953 ********
2026-04-09 06:05:45.439073 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:05:45.439082 | orchestrator |
2026-04-09 06:05:45.439091 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-09 06:05:45.439099 | orchestrator | Thursday 09 April 2026 06:05:32 +0000 (0:00:01.148) 0:54:34.101 ********
2026-04-09 06:05:45.439108 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4
2026-04-09 06:05:45.439118 | orchestrator |
2026-04-09 06:05:45.439126 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-09 06:05:45.439135 | orchestrator | Thursday 09 April 2026 06:05:33 +0000 (0:00:01.451) 0:54:35.553 ********
2026-04-09 06:05:45.439144 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4
2026-04-09 06:05:45.439153 | orchestrator |
2026-04-09 06:05:45.439161 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-09 06:05:45.439170 | orchestrator | Thursday 09 April 2026 06:05:35 +0000 (0:00:01.566) 0:54:37.120 ********
2026-04-09 06:05:45.439179 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:05:45.439188 | orchestrator |
2026-04-09 06:05:45.439197 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-09 06:05:45.439205 | orchestrator | Thursday 09 April 2026 06:05:37 +0000 (0:00:02.022) 0:54:39.142 ********
2026-04-09 06:05:45.439214 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:05:45.439223 | orchestrator |
2026-04-09 06:05:45.439231 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-09 06:05:45.439247 | orchestrator | Thursday 09 April 2026 06:05:39 +0000 (0:00:01.941) 0:54:41.083 ********
2026-04-09 06:05:45.439256 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:05:45.439265 | orchestrator |
2026-04-09 06:05:45.439274 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-09 06:05:45.439282 | orchestrator | Thursday 09 April 2026 06:05:41 +0000 (0:00:02.273) 0:54:43.357 ********
2026-04-09 06:05:45.439291 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:05:45.439300 | orchestrator |
2026-04-09 06:05:45.439309 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-09 06:05:45.439318 | orchestrator | Thursday 09 April 2026 06:05:43 +0000 (0:00:02.282) 0:54:45.639 ********
2026-04-09 06:05:45.439326 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:05:45.439335 | orchestrator |
2026-04-09 06:05:45.439344 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-04-09 06:05:45.439353 | orchestrator | Thursday 09 April 2026 06:05:45 +0000 (0:00:01.610) 0:54:47.249 ********
2026-04-09 06:05:45.439367 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:06:20.041399 | orchestrator |
2026-04-09 06:06:20.041522 | orchestrator | TASK [Restart active mds] ******************************************************
2026-04-09 06:06:20.041539 | orchestrator | Thursday 09 April 2026 06:05:46 +0000 (0:00:01.113) 0:54:48.362 ********
2026-04-09 06:06:20.041552 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:06:20.041565 | orchestrator |
2026-04-09 06:06:20.041576 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-04-09 06:06:20.041588 | orchestrator |
2026-04-09 06:06:20.041599 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 06:06:20.041611 | orchestrator | Thursday 09 April 2026 06:05:56 +0000 (0:00:09.808) 0:54:58.172 ********
2026-04-09 06:06:20.041623 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-5
2026-04-09 06:06:20.041635 | orchestrator |
2026-04-09 06:06:20.041646 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 06:06:20.041656 | orchestrator | Thursday 09 April 2026 06:05:57 +0000 (0:00:01.517) 0:54:59.690 ********
2026-04-09 06:06:20.041667 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:20.041678 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:20.041689 | orchestrator |
2026-04-09 06:06:20.041700 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 06:06:20.041711 | orchestrator | Thursday 09 April 2026 06:05:59 +0000 (0:00:01.547) 0:55:01.237 ********
2026-04-09 06:06:20.041722 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:20.041733 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:20.041744 | orchestrator |
2026-04-09 06:06:20.041755 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 06:06:20.041766 | orchestrator | Thursday 09 April 2026 06:06:00 +0000 (0:00:01.228) 0:55:02.465 ********
2026-04-09 06:06:20.041777 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:20.041788 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:20.041799 | orchestrator |
2026-04-09 06:06:20.041836 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 06:06:20.041849 | orchestrator | Thursday 09 April 2026 06:06:02 +0000 (0:00:01.601) 0:55:04.067 ********
2026-04-09 06:06:20.041860 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:20.041871 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:20.041882 | orchestrator |
2026-04-09 06:06:20.041893 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 06:06:20.041904 | orchestrator | Thursday 09 April 2026 06:06:03 +0000 (0:00:01.261) 0:55:05.329 ********
2026-04-09 06:06:20.041915 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:20.041926 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:20.041939 | orchestrator |
2026-04-09 06:06:20.041969 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 06:06:20.041984 | orchestrator | Thursday 09 April 2026 06:06:04 +0000 (0:00:01.247) 0:55:06.577 ********
2026-04-09 06:06:20.042091 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:20.042110 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:20.042124 | orchestrator |
2026-04-09 06:06:20.042138 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 06:06:20.042152 | orchestrator | Thursday 09 April 2026 06:06:06 +0000 (0:00:01.585) 0:55:08.163 ********
2026-04-09 06:06:20.042165 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:20.042180 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:06:20.042193 | orchestrator |
2026-04-09 06:06:20.042206 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 06:06:20.042220 | orchestrator | Thursday 09 April 2026 06:06:07 +0000 (0:00:01.257) 0:55:09.421 ********
2026-04-09 06:06:20.042233 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:20.042246 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:20.042260 | orchestrator |
2026-04-09 06:06:20.042273 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 06:06:20.042287 | orchestrator | Thursday 09 April 2026 06:06:08 +0000 (0:00:01.284) 0:55:10.706 ********
2026-04-09 06:06:20.042299 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:06:20.042309 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:06:20.042320 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:06:20.042331 | orchestrator |
2026-04-09 06:06:20.042342 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 06:06:20.042353 | orchestrator | Thursday 09 April 2026 06:06:10 +0000 (0:00:01.706) 0:55:12.412 ********
2026-04-09 06:06:20.042364 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:20.042376 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:20.042386 | orchestrator |
2026-04-09 06:06:20.042397 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 06:06:20.042408 | orchestrator | Thursday 09 April 2026 06:06:12 +0000 (0:00:01.473) 0:55:13.886 ********
2026-04-09 06:06:20.042419 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:06:20.042430 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:06:20.042441 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:06:20.042452 | orchestrator |
2026-04-09 06:06:20.042462 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 06:06:20.042473 | orchestrator | Thursday 09 April 2026 06:06:15 +0000 (0:00:03.219) 0:55:17.105 ********
2026-04-09 06:06:20.042484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0) 
2026-04-09 06:06:20.042496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1) 
2026-04-09 06:06:20.042506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2) 
2026-04-09 06:06:20.042517 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:20.042528 | orchestrator |
2026-04-09 06:06:20.042539 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 06:06:20.042550 | orchestrator | Thursday 09 April 2026 06:06:16 +0000 (0:00:01.445) 0:55:18.551 ********
2026-04-09 06:06:20.042582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-04-09 06:06:20.042597 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-04-09 06:06:20.042609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2026-04-09 06:06:20.042629 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:20.042641 | orchestrator |
2026-04-09 06:06:20.042652 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 06:06:20.042663 | orchestrator | Thursday 09 April 2026 06:06:18 +0000 (0:00:02.054) 0:55:20.605 ********
2026-04-09 06:06:20.042677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-04-09 06:06:20.042691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-04-09 06:06:20.042708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-04-09 06:06:20.042720 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:20.042731 | orchestrator |
2026-04-09 06:06:20.042742 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 06:06:20.042753 | orchestrator | Thursday 09 April 2026 06:06:19 +0000 (0:00:01.187) 0:55:21.792 ********
2026-04-09 06:06:20.042767 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 06:06:12.539385', 'end': '2026-04-09 06:06:12.593225', 'delta': '0:00:00.053840', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 06:06:20.042782 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 06:06:13.138478', 'end': '2026-04-09 06:06:13.194261', 'delta': '0:00:00.055783', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 06:06:20.042803 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 06:06:14.027866', 'end': '2026-04-09 06:06:14.069193', 'delta': '0:00:00.041327', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 06:06:40.413213 | orchestrator |
2026-04-09 06:06:40.413290 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-09 06:06:40.413297 | orchestrator | Thursday 09 April 2026 06:06:21 +0000 (0:00:01.272) 0:55:23.065 ********
2026-04-09 06:06:40.413302 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:40.413307 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:40.413311 | orchestrator |
2026-04-09 06:06:40.413316 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 06:06:40.413320 | orchestrator | Thursday 09 April 2026 06:06:22 +0000 (0:00:01.367) 0:55:24.433 ********
2026-04-09 06:06:40.413324 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:40.413329 | orchestrator |
2026-04-09 06:06:40.413333 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 06:06:40.413337 | orchestrator | Thursday 09 April 2026 06:06:23 +0000 (0:00:01.254) 0:55:25.687 ********
2026-04-09 06:06:40.413341 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:40.413345 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:40.413349 | orchestrator |
2026-04-09 06:06:40.413353 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 06:06:40.413357 | orchestrator | Thursday 09 April 2026 06:06:25 +0000 (0:00:01.254) 0:55:26.942 ********
2026-04-09 06:06:40.413361 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:06:40.413366 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:06:40.413370 | orchestrator |
2026-04-09 06:06:40.413373 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 06:06:40.413377 | orchestrator | Thursday 09 April 2026 06:06:27 +0000 (0:00:02.273) 0:55:29.215 ********
2026-04-09 06:06:40.413381 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:40.413385 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:40.413389 | orchestrator |
2026-04-09 06:06:40.413403 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-09 06:06:40.413408 | orchestrator | Thursday 09 April 2026 06:06:28 +0000 (0:00:01.387) 0:55:30.602 ********
2026-04-09 06:06:40.413412 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:40.413416 | orchestrator |
2026-04-09 06:06:40.413419 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-09 06:06:40.413423 | orchestrator | Thursday 09 April 2026 06:06:29 +0000 (0:00:01.129) 0:55:31.731 ********
2026-04-09 06:06:40.413427 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:40.413431 | orchestrator |
2026-04-09 06:06:40.413435 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 06:06:40.413439 | orchestrator | Thursday 09 April 2026 06:06:31 +0000 (0:00:01.245) 0:55:32.977 ********
2026-04-09 06:06:40.413443 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:40.413447 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:06:40.413451 | orchestrator |
2026-04-09 06:06:40.413455 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-09 06:06:40.413459 | orchestrator | Thursday 09 April 2026 06:06:32 +0000 (0:00:01.238) 0:55:34.215 ********
2026-04-09 06:06:40.413463 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:40.413467 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:06:40.413471 | orchestrator |
2026-04-09 06:06:40.413474 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-09 06:06:40.413478 | orchestrator | Thursday 09 April 2026 06:06:33 +0000 (0:00:01.566) 0:55:35.782 ********
2026-04-09 06:06:40.413482 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:40.413486 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:40.413490 | orchestrator |
2026-04-09 06:06:40.413494 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-09 06:06:40.413498 | orchestrator | Thursday 09 April 2026 06:06:35 +0000 (0:00:01.312) 0:55:37.094 ********
2026-04-09 06:06:40.413514 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:40.413518 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:06:40.413522 | orchestrator |
2026-04-09 06:06:40.413526 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-09 06:06:40.413530 | orchestrator | Thursday 09 April 2026 06:06:36 +0000 (0:00:01.213) 0:55:38.307 ********
2026-04-09 06:06:40.413534 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:40.413538 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:40.413541 | orchestrator |
2026-04-09 06:06:40.413545 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-09 06:06:40.413549 | orchestrator | Thursday 09 April 2026 06:06:37 +0000 (0:00:01.278) 0:55:39.586 ********
2026-04-09 06:06:40.413553 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:06:40.413557 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:06:40.413561 | orchestrator |
2026-04-09 06:06:40.413565 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-09 06:06:40.413569 | orchestrator | Thursday 09 April 2026 06:06:38 +0000 (0:00:01.224) 0:55:40.810 ********
2026-04-09 06:06:40.413573 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:06:40.413577 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:06:40.413581 | orchestrator |
2026-04-09 06:06:40.413585 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-09 06:06:40.413589 | orchestrator | Thursday 09 April 2026 06:06:40 +0000 (0:00:01.296) 0:55:42.107 ********
2026-04-09 06:06:40.413594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.413609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'uuids': ['9adc5058-59dc-41de-adf6-afc54c646e02'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ']}}) 
2026-04-09 06:06:40.413616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5d5b0f3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}) 
2026-04-09 06:06:40.413624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5']}}) 
2026-04-09 06:06:40.413636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.413641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.413645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-11-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}) 
2026-04-09 06:06:40.413650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.413658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw', 'dm-uuid-CRYPT-LUKS2-34a00b1693eb41a48240b70c6fb1290d-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.496688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.496769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'uuids': ['34a00b16-93eb-41a4-8240-b70c6fb1290d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw']}}) 
2026-04-09 06:06:40.496780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141']}}) 
2026-04-09 06:06:40.496799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.496804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.496843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bd1f840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}) 
2026-04-09 06:06:40.496858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'uuids': ['0d8306b6-b8d9-4741-84fa-e650942907f5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN']}}) 
2026-04-09 06:06:40.496873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.496881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e55aa834', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}) 
2026-04-09 06:06:40.496889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-09 06:06:40.496894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e']}})  2026-04-09 06:06:40.496904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ', 'dm-uuid-CRYPT-LUKS2-9adc505859dc41deadf6afc54c646e02-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 06:06:40.638536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:06:40.638641 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:06:40.638675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:06:40.638711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 06:06:40.638725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:06:40.638737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE', 'dm-uuid-CRYPT-LUKS2-a0c575bd231a435faa33ebc924c5d720-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 06:06:40.638748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:06:40.638761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'uuids': ['a0c575bd-231a-435f-aa33-ebc924c5d720'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE']}})  2026-04-09 06:06:40.638808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6']}})  2026-04-09 06:06:40.638892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:06:40.638964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e4edfb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 06:06:40.638982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:06:40.638995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:06:40.639017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN', 'dm-uuid-CRYPT-LUKS2-0d8306b6b8d9474184fae650942907f5-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 06:06:42.102888 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:06:42.103021 | orchestrator | 2026-04-09 06:06:42.103039 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 06:06:42.103052 | orchestrator | Thursday 09 April 2026 06:06:41 +0000 (0:00:01.610) 0:55:43.717 ******** 2026-04-09 06:06:42.103081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103097 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'uuids': ['9adc5058-59dc-41de-adf6-afc54c646e02'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103109 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5d5b0f3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103196 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-11-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103208 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103220 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103231 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw', 'dm-uuid-CRYPT-LUKS2-34a00b1693eb41a48240b70c6fb1290d-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.103251 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'uuids': ['0d8306b6-b8d9-4741-84fa-e650942907f5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.161164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.161258 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e55aa834', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.161274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'uuids': ['34a00b16-93eb-41a4-8240-b70c6fb1290d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.161286 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': 
{'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.161335 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.161354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.161366 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.161379 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': '0bd1f840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.161409 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': 
'506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257450 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257463 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ', 'dm-uuid-CRYPT-LUKS2-9adc505859dc41deadf6afc54c646e02-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE', 'dm-uuid-CRYPT-LUKS2-a0c575bd231a435faa33ebc924c5d720-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257560 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:06:42.257574 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257587 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': 
{'virtual': 1, 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'uuids': ['a0c575bd-231a-435f-aa33-ebc924c5d720'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257600 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257615 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:06:42.257650 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e4edfb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 
'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:07:10.010510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:07:10.010659 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:07:10.010724 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN', 'dm-uuid-CRYPT-LUKS2-0d8306b6b8d9474184fae650942907f5-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:07:10.010749 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:10.010772 | orchestrator | 2026-04-09 06:07:10.010793 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 06:07:10.010814 | orchestrator | Thursday 09 April 2026 06:06:43 +0000 (0:00:01.614) 0:55:45.332 ******** 2026-04-09 06:07:10.010882 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:10.010904 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:10.010924 | orchestrator | 2026-04-09 06:07:10.010943 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 06:07:10.010963 | orchestrator | Thursday 09 April 2026 06:06:45 +0000 (0:00:01.742) 0:55:47.074 ******** 2026-04-09 06:07:10.010983 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:10.011003 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:10.011024 | orchestrator | 2026-04-09 06:07:10.011049 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 06:07:10.011072 | orchestrator | Thursday 09 April 2026 06:06:46 +0000 (0:00:01.250) 0:55:48.325 ******** 2026-04-09 06:07:10.011098 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:10.011122 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:10.011147 | orchestrator | 2026-04-09 06:07:10.011192 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 06:07:10.011219 | orchestrator | Thursday 09 April 2026 06:06:48 +0000 (0:00:01.641) 0:55:49.967 ******** 2026-04-09 06:07:10.011247 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.011266 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:10.011285 | orchestrator | 2026-04-09 06:07:10.011311 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-04-09 06:07:10.011337 | orchestrator | Thursday 09 April 2026 06:06:49 +0000 (0:00:01.285) 0:55:51.253 ******** 2026-04-09 06:07:10.011358 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.011377 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:10.011396 | orchestrator | 2026-04-09 06:07:10.011416 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 06:07:10.011435 | orchestrator | Thursday 09 April 2026 06:06:50 +0000 (0:00:01.302) 0:55:52.556 ******** 2026-04-09 06:07:10.011452 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.011472 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:10.011489 | orchestrator | 2026-04-09 06:07:10.011507 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 06:07:10.011527 | orchestrator | Thursday 09 April 2026 06:06:52 +0000 (0:00:01.593) 0:55:54.150 ******** 2026-04-09 06:07:10.011546 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-09 06:07:10.011564 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-09 06:07:10.011582 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-09 06:07:10.011601 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-09 06:07:10.011620 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-09 06:07:10.011639 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-09 06:07:10.011657 | orchestrator | 2026-04-09 06:07:10.011675 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 06:07:10.011710 | orchestrator | Thursday 09 April 2026 06:06:54 +0000 (0:00:01.819) 0:55:55.969 ******** 2026-04-09 06:07:10.011744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-09 06:07:10.011757 | orchestrator 
| skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-09 06:07:10.011768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-09 06:07:10.011779 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.011790 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-09 06:07:10.011802 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-09 06:07:10.011813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-09 06:07:10.011853 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:10.011866 | orchestrator | 2026-04-09 06:07:10.011877 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 06:07:10.011889 | orchestrator | Thursday 09 April 2026 06:06:55 +0000 (0:00:01.265) 0:55:57.235 ******** 2026-04-09 06:07:10.011901 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-5 2026-04-09 06:07:10.011913 | orchestrator | 2026-04-09 06:07:10.011924 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 06:07:10.011937 | orchestrator | Thursday 09 April 2026 06:06:56 +0000 (0:00:01.226) 0:55:58.462 ******** 2026-04-09 06:07:10.011948 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.011959 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:10.011970 | orchestrator | 2026-04-09 06:07:10.011981 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 06:07:10.011992 | orchestrator | Thursday 09 April 2026 06:06:57 +0000 (0:00:01.245) 0:55:59.708 ******** 2026-04-09 06:07:10.012003 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.012013 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:10.012023 | orchestrator | 2026-04-09 06:07:10.012032 
| orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 06:07:10.012042 | orchestrator | Thursday 09 April 2026 06:06:59 +0000 (0:00:01.348) 0:56:01.057 ******** 2026-04-09 06:07:10.012052 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.012062 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:10.012071 | orchestrator | 2026-04-09 06:07:10.012081 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 06:07:10.012091 | orchestrator | Thursday 09 April 2026 06:07:00 +0000 (0:00:01.308) 0:56:02.365 ******** 2026-04-09 06:07:10.012101 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:10.012111 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:10.012121 | orchestrator | 2026-04-09 06:07:10.012131 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 06:07:10.012140 | orchestrator | Thursday 09 April 2026 06:07:01 +0000 (0:00:01.334) 0:56:03.699 ******** 2026-04-09 06:07:10.012150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 06:07:10.012160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 06:07:10.012169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 06:07:10.012179 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.012189 | orchestrator | 2026-04-09 06:07:10.012199 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 06:07:10.012208 | orchestrator | Thursday 09 April 2026 06:07:03 +0000 (0:00:01.440) 0:56:05.140 ******** 2026-04-09 06:07:10.012218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 06:07:10.012228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 06:07:10.012238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2026-04-09 06:07:10.012247 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.012257 | orchestrator | 2026-04-09 06:07:10.012267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 06:07:10.012283 | orchestrator | Thursday 09 April 2026 06:07:04 +0000 (0:00:01.446) 0:56:06.586 ******** 2026-04-09 06:07:10.012301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 06:07:10.012311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 06:07:10.012321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 06:07:10.012330 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:10.012340 | orchestrator | 2026-04-09 06:07:10.012349 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 06:07:10.012359 | orchestrator | Thursday 09 April 2026 06:07:06 +0000 (0:00:01.421) 0:56:08.008 ******** 2026-04-09 06:07:10.012369 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:10.012379 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:10.012388 | orchestrator | 2026-04-09 06:07:10.012398 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 06:07:10.012408 | orchestrator | Thursday 09 April 2026 06:07:07 +0000 (0:00:01.278) 0:56:09.286 ******** 2026-04-09 06:07:10.012418 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 06:07:10.012428 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-09 06:07:10.012438 | orchestrator | 2026-04-09 06:07:10.012447 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 06:07:10.012457 | orchestrator | Thursday 09 April 2026 06:07:08 +0000 (0:00:01.485) 0:56:10.771 ******** 2026-04-09 06:07:10.012467 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 
06:07:10.012477 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 06:07:10.012486 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 06:07:10.012496 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 06:07:10.012506 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 06:07:10.012516 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 06:07:10.012531 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 06:07:54.846682 | orchestrator | 2026-04-09 06:07:54.846776 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 06:07:54.846786 | orchestrator | Thursday 09 April 2026 06:07:11 +0000 (0:00:02.202) 0:56:12.974 ******** 2026-04-09 06:07:54.846793 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 06:07:54.846802 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 06:07:54.846809 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 06:07:54.846816 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 06:07:54.846824 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 06:07:54.846866 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 06:07:54.846876 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 06:07:54.846883 | orchestrator | 2026-04-09 06:07:54.846890 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-04-09 
06:07:54.846897 | orchestrator | Thursday 09 April 2026 06:07:14 +0000 (0:00:03.061) 0:56:16.036 ******** 2026-04-09 06:07:54.846904 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.846912 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.846919 | orchestrator | 2026-04-09 06:07:54.846926 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 06:07:54.846933 | orchestrator | Thursday 09 April 2026 06:07:15 +0000 (0:00:01.257) 0:56:17.293 ******** 2026-04-09 06:07:54.846939 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-5 2026-04-09 06:07:54.846968 | orchestrator | 2026-04-09 06:07:54.846975 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 06:07:54.846982 | orchestrator | Thursday 09 April 2026 06:07:16 +0000 (0:00:01.247) 0:56:18.541 ******** 2026-04-09 06:07:54.846989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-5 2026-04-09 06:07:54.846996 | orchestrator | 2026-04-09 06:07:54.847002 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 06:07:54.847009 | orchestrator | Thursday 09 April 2026 06:07:17 +0000 (0:00:01.250) 0:56:19.791 ******** 2026-04-09 06:07:54.847016 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847023 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847030 | orchestrator | 2026-04-09 06:07:54.847036 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 06:07:54.847043 | orchestrator | Thursday 09 April 2026 06:07:19 +0000 (0:00:01.542) 0:56:21.334 ******** 2026-04-09 06:07:54.847050 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847057 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847064 | 
orchestrator | 2026-04-09 06:07:54.847071 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 06:07:54.847078 | orchestrator | Thursday 09 April 2026 06:07:21 +0000 (0:00:01.588) 0:56:22.922 ******** 2026-04-09 06:07:54.847085 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847091 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847098 | orchestrator | 2026-04-09 06:07:54.847105 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 06:07:54.847112 | orchestrator | Thursday 09 April 2026 06:07:22 +0000 (0:00:01.686) 0:56:24.609 ******** 2026-04-09 06:07:54.847119 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847126 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847133 | orchestrator | 2026-04-09 06:07:54.847139 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 06:07:54.847146 | orchestrator | Thursday 09 April 2026 06:07:24 +0000 (0:00:01.653) 0:56:26.263 ******** 2026-04-09 06:07:54.847165 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847172 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847179 | orchestrator | 2026-04-09 06:07:54.847185 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 06:07:54.847192 | orchestrator | Thursday 09 April 2026 06:07:25 +0000 (0:00:01.314) 0:56:27.577 ******** 2026-04-09 06:07:54.847199 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847206 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847213 | orchestrator | 2026-04-09 06:07:54.847219 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 06:07:54.847226 | orchestrator | Thursday 09 April 2026 06:07:26 +0000 (0:00:01.242) 0:56:28.820 ******** 2026-04-09 06:07:54.847235 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 06:07:54.847244 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847250 | orchestrator | 2026-04-09 06:07:54.847256 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 06:07:54.847263 | orchestrator | Thursday 09 April 2026 06:07:28 +0000 (0:00:01.256) 0:56:30.076 ******** 2026-04-09 06:07:54.847273 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847282 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847292 | orchestrator | 2026-04-09 06:07:54.847302 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 06:07:54.847312 | orchestrator | Thursday 09 April 2026 06:07:29 +0000 (0:00:01.633) 0:56:31.710 ******** 2026-04-09 06:07:54.847322 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847331 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847341 | orchestrator | 2026-04-09 06:07:54.847350 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 06:07:54.847360 | orchestrator | Thursday 09 April 2026 06:07:31 +0000 (0:00:01.629) 0:56:33.340 ******** 2026-04-09 06:07:54.847370 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847386 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847395 | orchestrator | 2026-04-09 06:07:54.847406 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 06:07:54.847416 | orchestrator | Thursday 09 April 2026 06:07:32 +0000 (0:00:01.223) 0:56:34.563 ******** 2026-04-09 06:07:54.847426 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847449 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847459 | orchestrator | 2026-04-09 06:07:54.847468 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 06:07:54.847478 | orchestrator | Thursday 
09 April 2026 06:07:33 +0000 (0:00:01.234) 0:56:35.797 ******** 2026-04-09 06:07:54.847488 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847498 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847508 | orchestrator | 2026-04-09 06:07:54.847517 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 06:07:54.847527 | orchestrator | Thursday 09 April 2026 06:07:35 +0000 (0:00:01.262) 0:56:37.060 ******** 2026-04-09 06:07:54.847537 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847546 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847556 | orchestrator | 2026-04-09 06:07:54.847566 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 06:07:54.847576 | orchestrator | Thursday 09 April 2026 06:07:36 +0000 (0:00:01.233) 0:56:38.294 ******** 2026-04-09 06:07:54.847586 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847593 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847600 | orchestrator | 2026-04-09 06:07:54.847607 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 06:07:54.847614 | orchestrator | Thursday 09 April 2026 06:07:37 +0000 (0:00:01.322) 0:56:39.617 ******** 2026-04-09 06:07:54.847620 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847627 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847634 | orchestrator | 2026-04-09 06:07:54.847641 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 06:07:54.847648 | orchestrator | Thursday 09 April 2026 06:07:39 +0000 (0:00:01.267) 0:56:40.885 ******** 2026-04-09 06:07:54.847655 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847661 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847668 | orchestrator | 2026-04-09 06:07:54.847675 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-04-09 06:07:54.847682 | orchestrator | Thursday 09 April 2026 06:07:40 +0000 (0:00:01.281) 0:56:42.166 ******** 2026-04-09 06:07:54.847688 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847695 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847702 | orchestrator | 2026-04-09 06:07:54.847709 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 06:07:54.847715 | orchestrator | Thursday 09 April 2026 06:07:41 +0000 (0:00:01.238) 0:56:43.405 ******** 2026-04-09 06:07:54.847722 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847729 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847736 | orchestrator | 2026-04-09 06:07:54.847742 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 06:07:54.847749 | orchestrator | Thursday 09 April 2026 06:07:42 +0000 (0:00:01.296) 0:56:44.701 ******** 2026-04-09 06:07:54.847756 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:07:54.847763 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:07:54.847770 | orchestrator | 2026-04-09 06:07:54.847776 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 06:07:54.847783 | orchestrator | Thursday 09 April 2026 06:07:44 +0000 (0:00:01.262) 0:56:45.964 ******** 2026-04-09 06:07:54.847790 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847797 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847804 | orchestrator | 2026-04-09 06:07:54.847810 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 06:07:54.847817 | orchestrator | Thursday 09 April 2026 06:07:45 +0000 (0:00:01.713) 0:56:47.678 ******** 2026-04-09 06:07:54.847829 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847855 | orchestrator | skipping: [testbed-node-5] 
2026-04-09 06:07:54.847862 | orchestrator | 2026-04-09 06:07:54.847869 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 06:07:54.847876 | orchestrator | Thursday 09 April 2026 06:07:47 +0000 (0:00:01.286) 0:56:48.964 ******** 2026-04-09 06:07:54.847883 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847890 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847897 | orchestrator | 2026-04-09 06:07:54.847908 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 06:07:54.847915 | orchestrator | Thursday 09 April 2026 06:07:48 +0000 (0:00:01.297) 0:56:50.261 ******** 2026-04-09 06:07:54.847922 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847929 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847936 | orchestrator | 2026-04-09 06:07:54.847943 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 06:07:54.847950 | orchestrator | Thursday 09 April 2026 06:07:49 +0000 (0:00:01.261) 0:56:51.523 ******** 2026-04-09 06:07:54.847957 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847964 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.847971 | orchestrator | 2026-04-09 06:07:54.847978 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 06:07:54.847984 | orchestrator | Thursday 09 April 2026 06:07:50 +0000 (0:00:01.214) 0:56:52.737 ******** 2026-04-09 06:07:54.847991 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.847998 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.848005 | orchestrator | 2026-04-09 06:07:54.848012 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 06:07:54.848019 | orchestrator | Thursday 09 April 2026 06:07:52 +0000 (0:00:01.266) 0:56:54.003 ******** 
2026-04-09 06:07:54.848026 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.848033 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.848040 | orchestrator | 2026-04-09 06:07:54.848047 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-09 06:07:54.848054 | orchestrator | Thursday 09 April 2026 06:07:53 +0000 (0:00:01.245) 0:56:55.249 ******** 2026-04-09 06:07:54.848061 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.848068 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.848075 | orchestrator | 2026-04-09 06:07:54.848082 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 06:07:54.848089 | orchestrator | Thursday 09 April 2026 06:07:54 +0000 (0:00:01.218) 0:56:56.468 ******** 2026-04-09 06:07:54.848096 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:07:54.848103 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:07:54.848110 | orchestrator | 2026-04-09 06:07:54.848121 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 06:08:39.706747 | orchestrator | Thursday 09 April 2026 06:07:55 +0000 (0:00:01.245) 0:56:57.713 ******** 2026-04-09 06:08:39.706946 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.706967 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.706980 | orchestrator | 2026-04-09 06:08:39.706993 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 06:08:39.707005 | orchestrator | Thursday 09 April 2026 06:07:57 +0000 (0:00:01.193) 0:56:58.906 ******** 2026-04-09 06:08:39.707016 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.707028 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.707039 | orchestrator | 2026-04-09 06:08:39.707051 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-09 06:08:39.707062 | orchestrator | Thursday 09 April 2026 06:07:58 +0000 (0:00:01.222) 0:57:00.129 ******** 2026-04-09 06:08:39.707073 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.707084 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.707095 | orchestrator | 2026-04-09 06:08:39.707106 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 06:08:39.707144 | orchestrator | Thursday 09 April 2026 06:07:59 +0000 (0:00:01.232) 0:57:01.361 ******** 2026-04-09 06:08:39.707155 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:08:39.707167 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:08:39.707178 | orchestrator | 2026-04-09 06:08:39.707189 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 06:08:39.707200 | orchestrator | Thursday 09 April 2026 06:08:01 +0000 (0:00:02.125) 0:57:03.487 ******** 2026-04-09 06:08:39.707212 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:08:39.707222 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:08:39.707233 | orchestrator | 2026-04-09 06:08:39.707245 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 06:08:39.707256 | orchestrator | Thursday 09 April 2026 06:08:03 +0000 (0:00:02.383) 0:57:05.870 ******** 2026-04-09 06:08:39.707268 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-5 2026-04-09 06:08:39.707279 | orchestrator | 2026-04-09 06:08:39.707290 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 06:08:39.707301 | orchestrator | Thursday 09 April 2026 06:08:05 +0000 (0:00:01.225) 0:57:07.095 ******** 2026-04-09 06:08:39.707312 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.707323 | orchestrator | skipping: [testbed-node-5] 
2026-04-09 06:08:39.707334 | orchestrator | 2026-04-09 06:08:39.707345 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 06:08:39.707357 | orchestrator | Thursday 09 April 2026 06:08:06 +0000 (0:00:01.228) 0:57:08.324 ******** 2026-04-09 06:08:39.707368 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.707379 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.707389 | orchestrator | 2026-04-09 06:08:39.707401 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 06:08:39.707412 | orchestrator | Thursday 09 April 2026 06:08:07 +0000 (0:00:01.233) 0:57:09.558 ******** 2026-04-09 06:08:39.707423 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 06:08:39.707434 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 06:08:39.707445 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 06:08:39.707456 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 06:08:39.707467 | orchestrator | 2026-04-09 06:08:39.707478 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 06:08:39.707490 | orchestrator | Thursday 09 April 2026 06:08:09 +0000 (0:00:01.910) 0:57:11.468 ******** 2026-04-09 06:08:39.707501 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:08:39.707512 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:08:39.707523 | orchestrator | 2026-04-09 06:08:39.707553 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 06:08:39.707564 | orchestrator | Thursday 09 April 2026 06:08:11 +0000 (0:00:01.961) 0:57:13.429 ******** 2026-04-09 06:08:39.707575 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.707586 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.707597 | orchestrator | 2026-04-09 06:08:39.707608 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 06:08:39.707619 | orchestrator | Thursday 09 April 2026 06:08:12 +0000 (0:00:01.302) 0:57:14.732 ******** 2026-04-09 06:08:39.707630 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.707641 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.707652 | orchestrator | 2026-04-09 06:08:39.707663 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 06:08:39.707674 | orchestrator | Thursday 09 April 2026 06:08:14 +0000 (0:00:01.251) 0:57:15.983 ******** 2026-04-09 06:08:39.707685 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.707696 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.707716 | orchestrator | 2026-04-09 06:08:39.707727 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 06:08:39.707738 | orchestrator | Thursday 09 April 2026 06:08:15 +0000 (0:00:01.335) 0:57:17.319 ******** 2026-04-09 06:08:39.707749 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-5 2026-04-09 06:08:39.707760 | orchestrator | 2026-04-09 06:08:39.707771 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 06:08:39.707782 | orchestrator | Thursday 09 April 2026 06:08:16 +0000 (0:00:01.229) 0:57:18.549 ******** 2026-04-09 06:08:39.707793 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:08:39.707804 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:08:39.707815 | orchestrator | 2026-04-09 06:08:39.707826 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-09 06:08:39.707879 | orchestrator | Thursday 09 April 2026 
06:08:18 +0000 (0:00:01.995) 0:57:20.544 ******** 2026-04-09 06:08:39.707891 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 06:08:39.707926 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 06:08:39.707938 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 06:08:39.707949 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.707961 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 06:08:39.707972 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 06:08:39.707983 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 06:08:39.707994 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708005 | orchestrator | 2026-04-09 06:08:39.708016 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-09 06:08:39.708027 | orchestrator | Thursday 09 April 2026 06:08:19 +0000 (0:00:01.247) 0:57:21.792 ******** 2026-04-09 06:08:39.708038 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708049 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708060 | orchestrator | 2026-04-09 06:08:39.708071 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-09 06:08:39.708082 | orchestrator | Thursday 09 April 2026 06:08:21 +0000 (0:00:01.244) 0:57:23.037 ******** 2026-04-09 06:08:39.708094 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708105 | orchestrator | 2026-04-09 06:08:39.708116 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-09 06:08:39.708127 | orchestrator | Thursday 09 April 2026 06:08:22 +0000 (0:00:01.169) 0:57:24.206 ******** 2026-04-09 06:08:39.708138 | orchestrator 
| skipping: [testbed-node-3] 2026-04-09 06:08:39.708149 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708160 | orchestrator | 2026-04-09 06:08:39.708171 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-09 06:08:39.708182 | orchestrator | Thursday 09 April 2026 06:08:23 +0000 (0:00:01.249) 0:57:25.456 ******** 2026-04-09 06:08:39.708193 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708204 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708215 | orchestrator | 2026-04-09 06:08:39.708227 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-09 06:08:39.708238 | orchestrator | Thursday 09 April 2026 06:08:24 +0000 (0:00:01.273) 0:57:26.729 ******** 2026-04-09 06:08:39.708249 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708260 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708271 | orchestrator | 2026-04-09 06:08:39.708282 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 06:08:39.708293 | orchestrator | Thursday 09 April 2026 06:08:26 +0000 (0:00:01.280) 0:57:28.010 ******** 2026-04-09 06:08:39.708304 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:08:39.708315 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:08:39.708326 | orchestrator | 2026-04-09 06:08:39.708345 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 06:08:39.708356 | orchestrator | Thursday 09 April 2026 06:08:28 +0000 (0:00:02.668) 0:57:30.679 ******** 2026-04-09 06:08:39.708367 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:08:39.708379 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:08:39.708390 | orchestrator | 2026-04-09 06:08:39.708401 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 06:08:39.708412 | 
orchestrator | Thursday 09 April 2026 06:08:30 +0000 (0:00:01.353) 0:57:32.032 ******** 2026-04-09 06:08:39.708434 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-5 2026-04-09 06:08:39.708448 | orchestrator | 2026-04-09 06:08:39.708459 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-09 06:08:39.708470 | orchestrator | Thursday 09 April 2026 06:08:31 +0000 (0:00:01.401) 0:57:33.434 ******** 2026-04-09 06:08:39.708481 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708493 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708504 | orchestrator | 2026-04-09 06:08:39.708531 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-09 06:08:39.708543 | orchestrator | Thursday 09 April 2026 06:08:32 +0000 (0:00:01.282) 0:57:34.716 ******** 2026-04-09 06:08:39.708554 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708565 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708576 | orchestrator | 2026-04-09 06:08:39.708588 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-09 06:08:39.708599 | orchestrator | Thursday 09 April 2026 06:08:34 +0000 (0:00:01.271) 0:57:35.988 ******** 2026-04-09 06:08:39.708610 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708621 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708632 | orchestrator | 2026-04-09 06:08:39.708643 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-09 06:08:39.708654 | orchestrator | Thursday 09 April 2026 06:08:35 +0000 (0:00:01.270) 0:57:37.258 ******** 2026-04-09 06:08:39.708665 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708676 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708687 | orchestrator | 2026-04-09 
06:08:39.708698 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-09 06:08:39.708709 | orchestrator | Thursday 09 April 2026 06:08:36 +0000 (0:00:01.553) 0:57:38.812 ******** 2026-04-09 06:08:39.708720 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708731 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708742 | orchestrator | 2026-04-09 06:08:39.708753 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-09 06:08:39.708764 | orchestrator | Thursday 09 April 2026 06:08:38 +0000 (0:00:01.248) 0:57:40.061 ******** 2026-04-09 06:08:39.708775 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708786 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708797 | orchestrator | 2026-04-09 06:08:39.708808 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-09 06:08:39.708819 | orchestrator | Thursday 09 April 2026 06:08:39 +0000 (0:00:01.263) 0:57:41.324 ******** 2026-04-09 06:08:39.708830 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:08:39.708863 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:08:39.708874 | orchestrator | 2026-04-09 06:08:39.708893 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-09 06:09:18.832991 | orchestrator | Thursday 09 April 2026 06:08:40 +0000 (0:00:01.304) 0:57:42.629 ******** 2026-04-09 06:09:18.833151 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.833178 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.833198 | orchestrator | 2026-04-09 06:09:18.833218 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-09 06:09:18.833237 | orchestrator | Thursday 09 April 2026 06:08:42 +0000 (0:00:01.265) 0:57:43.895 ******** 2026-04-09 06:09:18.833257 | orchestrator | ok: 
[testbed-node-3] 2026-04-09 06:09:18.833302 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:09:18.833314 | orchestrator | 2026-04-09 06:09:18.833326 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-09 06:09:18.833337 | orchestrator | Thursday 09 April 2026 06:08:43 +0000 (0:00:01.267) 0:57:45.163 ******** 2026-04-09 06:09:18.833349 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-5 2026-04-09 06:09:18.833361 | orchestrator | 2026-04-09 06:09:18.833372 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-09 06:09:18.833383 | orchestrator | Thursday 09 April 2026 06:08:44 +0000 (0:00:01.539) 0:57:46.703 ******** 2026-04-09 06:09:18.833394 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-04-09 06:09:18.833406 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-04-09 06:09:18.833426 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-09 06:09:18.833445 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-09 06:09:18.833465 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-09 06:09:18.833484 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-09 06:09:18.833504 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-09 06:09:18.833523 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-09 06:09:18.833543 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-09 06:09:18.833563 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-09 06:09:18.833577 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-09 06:09:18.833590 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-09 06:09:18.833604 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 
2026-04-09 06:09:18.833616 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-09 06:09:18.833629 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-09 06:09:18.833642 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-09 06:09:18.833655 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-09 06:09:18.833668 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-09 06:09:18.833681 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-09 06:09:18.833694 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-09 06:09:18.833707 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-09 06:09:18.833721 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-09 06:09:18.833734 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-09 06:09:18.833746 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-09 06:09:18.833759 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-09 06:09:18.833772 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-09 06:09:18.833785 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-09 06:09:18.833819 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-09 06:09:18.833907 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-04-09 06:09:18.833921 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-04-09 06:09:18.833932 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-04-09 06:09:18.833943 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-04-09 06:09:18.833954 | orchestrator | 2026-04-09 06:09:18.833966 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-09 06:09:18.833977 | orchestrator | Thursday 09 April 2026 06:08:51 +0000 (0:00:06.676) 0:57:53.379 ******** 2026-04-09 06:09:18.833988 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-5 2026-04-09 06:09:18.833999 | orchestrator | 2026-04-09 06:09:18.834090 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-09 06:09:18.834103 | orchestrator | Thursday 09 April 2026 06:08:52 +0000 (0:00:01.241) 0:57:54.620 ******** 2026-04-09 06:09:18.834114 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 06:09:18.834128 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 06:09:18.834138 | orchestrator | 2026-04-09 06:09:18.834148 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-09 06:09:18.834158 | orchestrator | Thursday 09 April 2026 06:08:54 +0000 (0:00:01.656) 0:57:56.276 ******** 2026-04-09 06:09:18.834167 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 06:09:18.834177 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 06:09:18.834187 | orchestrator | 2026-04-09 06:09:18.834197 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 06:09:18.834231 | orchestrator | Thursday 09 April 2026 06:08:56 +0000 (0:00:02.082) 0:57:58.359 ******** 2026-04-09 06:09:18.834241 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834251 | orchestrator | 
skipping: [testbed-node-5] 2026-04-09 06:09:18.834261 | orchestrator | 2026-04-09 06:09:18.834271 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 06:09:18.834281 | orchestrator | Thursday 09 April 2026 06:08:57 +0000 (0:00:01.259) 0:57:59.618 ******** 2026-04-09 06:09:18.834291 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834301 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.834311 | orchestrator | 2026-04-09 06:09:18.834321 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 06:09:18.834330 | orchestrator | Thursday 09 April 2026 06:08:59 +0000 (0:00:01.345) 0:58:00.964 ******** 2026-04-09 06:09:18.834340 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834350 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.834360 | orchestrator | 2026-04-09 06:09:18.834370 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-09 06:09:18.834380 | orchestrator | Thursday 09 April 2026 06:09:00 +0000 (0:00:01.248) 0:58:02.213 ******** 2026-04-09 06:09:18.834390 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834400 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.834409 | orchestrator | 2026-04-09 06:09:18.834419 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 06:09:18.834429 | orchestrator | Thursday 09 April 2026 06:09:01 +0000 (0:00:01.220) 0:58:03.433 ******** 2026-04-09 06:09:18.834439 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834449 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.834459 | orchestrator | 2026-04-09 06:09:18.834468 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-09 06:09:18.834479 | orchestrator | Thursday 09 April 2026 
06:09:02 +0000 (0:00:01.307) 0:58:04.741 ******** 2026-04-09 06:09:18.834489 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834498 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.834508 | orchestrator | 2026-04-09 06:09:18.834518 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-09 06:09:18.834528 | orchestrator | Thursday 09 April 2026 06:09:04 +0000 (0:00:01.241) 0:58:05.983 ******** 2026-04-09 06:09:18.834538 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834548 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.834558 | orchestrator | 2026-04-09 06:09:18.834567 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-09 06:09:18.834577 | orchestrator | Thursday 09 April 2026 06:09:05 +0000 (0:00:01.597) 0:58:07.580 ******** 2026-04-09 06:09:18.834595 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834605 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.834614 | orchestrator | 2026-04-09 06:09:18.834624 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-09 06:09:18.834634 | orchestrator | Thursday 09 April 2026 06:09:06 +0000 (0:00:01.230) 0:58:08.811 ******** 2026-04-09 06:09:18.834644 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834654 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.834664 | orchestrator | 2026-04-09 06:09:18.834673 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-09 06:09:18.834684 | orchestrator | Thursday 09 April 2026 06:09:08 +0000 (0:00:01.246) 0:58:10.058 ******** 2026-04-09 06:09:18.834693 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834703 | orchestrator | skipping: [testbed-node-5] 2026-04-09 
06:09:18.834713 | orchestrator | 2026-04-09 06:09:18.834723 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-09 06:09:18.834733 | orchestrator | Thursday 09 April 2026 06:09:09 +0000 (0:00:01.222) 0:58:11.280 ******** 2026-04-09 06:09:18.834742 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:09:18.834759 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:09:18.834769 | orchestrator | 2026-04-09 06:09:18.834779 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-09 06:09:18.834789 | orchestrator | Thursday 09 April 2026 06:09:10 +0000 (0:00:01.274) 0:58:12.554 ******** 2026-04-09 06:09:18.834798 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-09 06:09:18.834808 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-04-09 06:09:18.834818 | orchestrator | 2026-04-09 06:09:18.834828 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-09 06:09:18.834855 | orchestrator | Thursday 09 April 2026 06:09:15 +0000 (0:00:04.524) 0:58:17.078 ******** 2026-04-09 06:09:18.834865 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 06:09:18.834875 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 06:09:18.834885 | orchestrator | 2026-04-09 06:09:18.834894 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 06:09:18.834904 | orchestrator | Thursday 09 April 2026 06:09:16 +0000 (0:00:01.384) 0:58:18.463 ******** 2026-04-09 06:09:18.834917 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-09 06:09:18.834938 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-04-09 06:10:08.704709 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-09 06:10:08.704917 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-04-09 06:10:08.704982 | orchestrator | 2026-04-09 06:10:08.705007 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 06:10:08.705029 | orchestrator | Thursday 09 April 2026 06:09:21 +0000 (0:00:05.128) 0:58:23.592 ******** 2026-04-09 06:10:08.705048 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:10:08.705070 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:10:08.705084 | orchestrator | 2026-04-09 06:10:08.705095 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 06:10:08.705107 | orchestrator | Thursday 09 April 2026 06:09:22 +0000 
(0:00:01.259) 0:58:24.852 ********
2026-04-09 06:10:08.705118 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.705128 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:10:08.705139 | orchestrator |
2026-04-09 06:10:08.705151 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 06:10:08.705164 | orchestrator | Thursday 09 April 2026 06:09:24 +0000 (0:00:01.268) 0:58:26.121 ********
2026-04-09 06:10:08.705175 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.705186 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:10:08.705196 | orchestrator |
2026-04-09 06:10:08.705207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 06:10:08.705218 | orchestrator | Thursday 09 April 2026 06:09:25 +0000 (0:00:01.284) 0:58:27.405 ********
2026-04-09 06:10:08.705228 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.705239 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:10:08.705250 | orchestrator |
2026-04-09 06:10:08.705260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 06:10:08.705271 | orchestrator | Thursday 09 April 2026 06:09:26 +0000 (0:00:01.237) 0:58:28.642 ********
2026-04-09 06:10:08.705282 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.705292 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:10:08.705303 | orchestrator |
2026-04-09 06:10:08.705313 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 06:10:08.705324 | orchestrator | Thursday 09 April 2026 06:09:28 +0000 (0:00:01.254) 0:58:29.896 ********
2026-04-09 06:10:08.705335 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:08.705346 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:10:08.705357 | orchestrator |
2026-04-09 06:10:08.705367 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 06:10:08.705378 | orchestrator | Thursday 09 April 2026 06:09:29 +0000 (0:00:01.789) 0:58:31.685 ********
2026-04-09 06:10:08.705388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 06:10:08.705399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 06:10:08.705425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 06:10:08.705436 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.705447 | orchestrator |
2026-04-09 06:10:08.705458 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 06:10:08.705469 | orchestrator | Thursday 09 April 2026 06:09:31 +0000 (0:00:01.447) 0:58:33.133 ********
2026-04-09 06:10:08.705480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 06:10:08.705490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 06:10:08.705501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 06:10:08.705512 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.705523 | orchestrator |
2026-04-09 06:10:08.705533 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 06:10:08.705544 | orchestrator | Thursday 09 April 2026 06:09:32 +0000 (0:00:01.439) 0:58:34.573 ********
2026-04-09 06:10:08.705554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 06:10:08.705565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 06:10:08.705576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 06:10:08.705595 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.705606 | orchestrator |
2026-04-09 06:10:08.705617 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 06:10:08.705627 | orchestrator | Thursday 09 April 2026 06:09:34 +0000 (0:00:01.547) 0:58:36.120 ********
2026-04-09 06:10:08.705638 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:08.705649 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:10:08.705660 | orchestrator |
2026-04-09 06:10:08.705670 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 06:10:08.705681 | orchestrator | Thursday 09 April 2026 06:09:35 +0000 (0:00:01.279) 0:58:37.400 ********
2026-04-09 06:10:08.705692 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 06:10:08.705703 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-09 06:10:08.705713 | orchestrator |
2026-04-09 06:10:08.705724 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 06:10:08.705735 | orchestrator | Thursday 09 April 2026 06:09:37 +0000 (0:00:01.503) 0:58:38.904 ********
2026-04-09 06:10:08.705745 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:08.705756 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:10:08.705767 | orchestrator |
2026-04-09 06:10:08.705796 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-09 06:10:08.705808 | orchestrator | Thursday 09 April 2026 06:09:39 +0000 (0:00:02.051) 0:58:40.956 ********
2026-04-09 06:10:08.705818 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.705857 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:10:08.705869 | orchestrator |
2026-04-09 06:10:08.705880 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-09 06:10:08.705891 | orchestrator | Thursday 09 April 2026 06:09:40 +0000 (0:00:01.273) 0:58:42.229 ********
2026-04-09 06:10:08.705901 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-5
2026-04-09 06:10:08.705913 | orchestrator |
2026-04-09 06:10:08.705923 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-09 06:10:08.705934 | orchestrator | Thursday 09 April 2026 06:09:41 +0000 (0:00:01.207) 0:58:43.437 ********
2026-04-09 06:10:08.705945 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-09 06:10:08.705955 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-09 06:10:08.705966 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-09 06:10:08.705976 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-09 06:10:08.705987 | orchestrator |
2026-04-09 06:10:08.705998 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-09 06:10:08.706008 | orchestrator | Thursday 09 April 2026 06:09:43 +0000 (0:00:02.024) 0:58:45.462 ********
2026-04-09 06:10:08.706076 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 06:10:08.706090 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 06:10:08.706101 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 06:10:08.706112 | orchestrator |
2026-04-09 06:10:08.706122 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-09 06:10:08.706133 | orchestrator | Thursday 09 April 2026 06:09:46 +0000 (0:00:03.207) 0:58:48.669 ********
2026-04-09 06:10:08.706144 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-04-09 06:10:08.706165 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 06:10:08.706176 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:08.706187 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-09 06:10:08.706198 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-09 06:10:08.706208 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:10:08.706219 | orchestrator |
2026-04-09 06:10:08.706230 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-09 06:10:08.706241 | orchestrator | Thursday 09 April 2026 06:09:48 +0000 (0:00:02.176) 0:58:50.846 ********
2026-04-09 06:10:08.706260 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:08.706271 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:10:08.706282 | orchestrator |
2026-04-09 06:10:08.706293 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-09 06:10:08.706304 | orchestrator | Thursday 09 April 2026 06:09:50 +0000 (0:00:01.718) 0:58:52.565 ********
2026-04-09 06:10:08.706314 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.706325 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:10:08.706336 | orchestrator |
2026-04-09 06:10:08.706347 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-09 06:10:08.706358 | orchestrator | Thursday 09 April 2026 06:09:51 +0000 (0:00:01.280) 0:58:53.846 ********
2026-04-09 06:10:08.706368 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-5
2026-04-09 06:10:08.706379 | orchestrator |
2026-04-09 06:10:08.706397 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-09 06:10:08.706408 | orchestrator | Thursday 09 April 2026 06:09:53 +0000 (0:00:01.266) 0:58:55.112 ********
2026-04-09 06:10:08.706418 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-5
2026-04-09 06:10:08.706429 | orchestrator |
2026-04-09 06:10:08.706440 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-09 06:10:08.706450 | orchestrator | Thursday 09 April 2026 06:09:54 +0000 (0:00:01.200) 0:58:56.313 ********
2026-04-09 06:10:08.706461 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:08.706472 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:10:08.706483 | orchestrator |
2026-04-09 06:10:08.706494 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-09 06:10:08.706505 | orchestrator | Thursday 09 April 2026 06:09:56 +0000 (0:00:02.140) 0:58:58.453 ********
2026-04-09 06:10:08.706515 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:08.706526 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:10:08.706537 | orchestrator |
2026-04-09 06:10:08.706548 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-09 06:10:08.706559 | orchestrator | Thursday 09 April 2026 06:09:58 +0000 (0:00:02.375) 0:59:00.829 ********
2026-04-09 06:10:08.706569 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:08.706580 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:10:08.706591 | orchestrator |
2026-04-09 06:10:08.706602 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-09 06:10:08.706613 | orchestrator | Thursday 09 April 2026 06:10:01 +0000 (0:00:02.391) 0:59:03.221 ********
2026-04-09 06:10:08.706624 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:10:08.706634 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:10:08.706645 | orchestrator |
2026-04-09 06:10:08.706656 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-09 06:10:08.706667 | orchestrator | Thursday 09 April 2026 06:10:04 +0000 (0:00:03.513) 0:59:06.734 ********
2026-04-09 06:10:08.706677 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:08.706688 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:10:08.706699 | orchestrator |
2026-04-09 06:10:08.706710 | orchestrator | TASK [Set max_mds] *************************************************************
2026-04-09 06:10:08.706720 | orchestrator | Thursday 09 April 2026 06:10:06 +0000 (0:00:01.774) 0:59:08.509 ********
2026-04-09 06:10:08.706731 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:08.706751 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:10:32.833301 | orchestrator |
2026-04-09 06:10:32.833450 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-04-09 06:10:32.833482 | orchestrator |
2026-04-09 06:10:32.833502 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 06:10:32.833514 | orchestrator | Thursday 09 April 2026 06:10:10 +0000 (0:00:03.369) 0:59:11.878 ********
2026-04-09 06:10:32.833525 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-04-09 06:10:32.833536 | orchestrator |
2026-04-09 06:10:32.833572 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 06:10:32.833584 | orchestrator | Thursday 09 April 2026 06:10:11 +0000 (0:00:01.297) 0:59:13.176 ********
2026-04-09 06:10:32.833595 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:32.833607 | orchestrator |
2026-04-09 06:10:32.833618 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 06:10:32.833629 | orchestrator | Thursday 09 April 2026 06:10:12 +0000 (0:00:01.466) 0:59:14.642 ********
2026-04-09 06:10:32.833640 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:32.833651 | orchestrator |
2026-04-09 06:10:32.833662 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 06:10:32.833673 | orchestrator | Thursday 09 April 2026 06:10:13 +0000 (0:00:01.127) 0:59:15.769 ********
2026-04-09 06:10:32.833683 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:32.833694 | orchestrator |
2026-04-09 06:10:32.833705 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 06:10:32.833716 | orchestrator | Thursday 09 April 2026 06:10:15 +0000 (0:00:01.412) 0:59:17.182 ********
2026-04-09 06:10:32.833727 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:32.833737 | orchestrator |
2026-04-09 06:10:32.833748 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 06:10:32.833759 | orchestrator | Thursday 09 April 2026 06:10:16 +0000 (0:00:01.178) 0:59:18.361 ********
2026-04-09 06:10:32.833769 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:32.833780 | orchestrator |
2026-04-09 06:10:32.833791 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 06:10:32.833802 | orchestrator | Thursday 09 April 2026 06:10:17 +0000 (0:00:01.186) 0:59:19.547 ********
2026-04-09 06:10:32.833813 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:32.833857 | orchestrator |
2026-04-09 06:10:32.833875 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 06:10:32.833890 | orchestrator | Thursday 09 April 2026 06:10:18 +0000 (0:00:01.213) 0:59:20.760 ********
2026-04-09 06:10:32.833903 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:32.833917 | orchestrator |
2026-04-09 06:10:32.833930 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 06:10:32.833942 | orchestrator | Thursday 09 April 2026 06:10:20 +0000 (0:00:01.134) 0:59:21.895 ********
2026-04-09 06:10:32.833954 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:32.833966 | orchestrator |
2026-04-09 06:10:32.833979 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 06:10:32.833990 | orchestrator | Thursday 09 April 2026 06:10:21 +0000 (0:00:01.240) 0:59:23.136 ********
2026-04-09 06:10:32.834003 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:10:32.834074 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:10:32.834098 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:10:32.834116 | orchestrator |
2026-04-09 06:10:32.834138 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 06:10:32.834176 | orchestrator | Thursday 09 April 2026 06:10:23 +0000 (0:00:02.039) 0:59:25.176 ********
2026-04-09 06:10:32.834190 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:32.834201 | orchestrator |
2026-04-09 06:10:32.834211 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 06:10:32.834222 | orchestrator | Thursday 09 April 2026 06:10:24 +0000 (0:00:01.272) 0:59:26.449 ********
2026-04-09 06:10:32.834233 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:10:32.834243 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:10:32.834254 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:10:32.834264 | orchestrator |
2026-04-09 06:10:32.834275 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 06:10:32.834296 | orchestrator | Thursday 09 April 2026 06:10:27 +0000 (0:00:03.243) 0:59:29.693 ********
2026-04-09 06:10:32.834307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 06:10:32.834319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 06:10:32.834330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 06:10:32.834341 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:32.834352 | orchestrator |
2026-04-09 06:10:32.834363 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 06:10:32.834374 | orchestrator | Thursday 09 April 2026 06:10:29 +0000 (0:00:01.866) 0:59:31.559 ********
2026-04-09 06:10:32.834387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 06:10:32.834402 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 06:10:32.834434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 06:10:32.834446 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:32.834457 | orchestrator |
2026-04-09 06:10:32.834468 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 06:10:32.834479 | orchestrator | Thursday 09 April 2026 06:10:31 +0000 (0:00:01.707) 0:59:33.267 ********
2026-04-09 06:10:32.834491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 06:10:32.834505 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 06:10:32.834517 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 06:10:32.834528 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:32.834539 | orchestrator |
2026-04-09 06:10:32.834550 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 06:10:32.834561 | orchestrator | Thursday 09 April 2026 06:10:32 +0000 (0:00:01.202) 0:59:34.469 ********
2026-04-09 06:10:32.834580 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 06:10:25.501839', 'end': '2026-04-09 06:10:25.556586', 'delta': '0:00:00.054747', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 06:10:32.834602 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 06:10:26.060319', 'end': '2026-04-09 06:10:26.098833', 'delta': '0:00:00.038514', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 06:10:32.834614 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 06:10:26.611947', 'end': '2026-04-09 06:10:26.652187', 'delta': '0:00:00.040240', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 06:10:32.834625 | orchestrator |
2026-04-09 06:10:32.834644 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-09 06:10:51.535248 | orchestrator | Thursday 09 April 2026 06:10:33 +0000 (0:00:01.226) 0:59:35.695 ********
2026-04-09 06:10:51.535372 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:51.535390 | orchestrator |
2026-04-09 06:10:51.535403 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 06:10:51.535415 | orchestrator | Thursday 09 April 2026 06:10:35 +0000 (0:00:01.308) 0:59:37.004 ********
2026-04-09 06:10:51.535426 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:51.535438 | orchestrator |
2026-04-09 06:10:51.535450 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 06:10:51.535461 | orchestrator | Thursday 09 April 2026 06:10:36 +0000 (0:00:01.261) 0:59:38.266 ********
2026-04-09 06:10:51.535472 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:51.535483 | orchestrator |
2026-04-09 06:10:51.535494 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 06:10:51.535505 | orchestrator | Thursday 09 April 2026 06:10:37 +0000 (0:00:01.166) 0:59:39.433 ********
2026-04-09 06:10:51.535517 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:10:51.535528 | orchestrator |
2026-04-09 06:10:51.535539 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 06:10:51.535550 | orchestrator | Thursday 09 April 2026 06:10:39 +0000 (0:00:02.031) 0:59:41.464 ********
2026-04-09 06:10:51.535561 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:51.535572 | orchestrator |
2026-04-09 06:10:51.535583 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-09 06:10:51.535594 | orchestrator | Thursday 09 April 2026 06:10:40 +0000 (0:00:01.198) 0:59:42.662 ********
2026-04-09 06:10:51.535605 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:51.535616 | orchestrator |
2026-04-09 06:10:51.535627 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-09 06:10:51.535638 | orchestrator | Thursday 09 April 2026 06:10:41 +0000 (0:00:01.120) 0:59:43.783 ********
2026-04-09 06:10:51.535649 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:51.535660 | orchestrator |
2026-04-09 06:10:51.535671 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 06:10:51.535707 | orchestrator | Thursday 09 April 2026 06:10:43 +0000 (0:00:01.291) 0:59:45.075 ********
2026-04-09 06:10:51.535720 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:51.535731 | orchestrator |
2026-04-09 06:10:51.535742 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-09 06:10:51.535754 | orchestrator | Thursday 09 April 2026 06:10:44 +0000 (0:00:01.124) 0:59:46.199 ********
2026-04-09 06:10:51.535767 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:51.535780 | orchestrator |
2026-04-09 06:10:51.535793 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-09 06:10:51.535806 | orchestrator | Thursday 09 April 2026 06:10:45 +0000 (0:00:01.158) 0:59:47.357 ********
2026-04-09 06:10:51.535820 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:51.535865 | orchestrator |
2026-04-09 06:10:51.535878 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-09 06:10:51.535891 | orchestrator | Thursday 09 April 2026 06:10:46 +0000 (0:00:01.273) 0:59:48.631 ********
2026-04-09 06:10:51.535904 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:51.535916 | orchestrator |
2026-04-09 06:10:51.535929 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-09 06:10:51.535943 | orchestrator | Thursday 09 April 2026 06:10:47 +0000 (0:00:01.129) 0:59:49.760 ********
2026-04-09 06:10:51.535956 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:51.535968 | orchestrator |
2026-04-09 06:10:51.535997 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-09 06:10:51.536012 | orchestrator | Thursday 09 April 2026 06:10:49 +0000 (0:00:01.168) 0:59:50.929 ********
2026-04-09 06:10:51.536025 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:51.536038 | orchestrator |
2026-04-09 06:10:51.536050 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-09 06:10:51.536064 | orchestrator | Thursday 09 April 2026 06:10:50 +0000 (0:00:01.082) 0:59:52.012 ********
2026-04-09 06:10:51.536077 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:10:51.536090 | orchestrator |
2026-04-09 06:10:51.536103 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-09 06:10:51.536117 | orchestrator | Thursday 09 April 2026 06:10:51 +0000 (0:00:01.202) 0:59:53.214 ********
2026-04-09 06:10:51.536131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:10:51.536147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'uuids': ['9adc5058-59dc-41de-adf6-afc54c646e02'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ']}})
2026-04-09 06:10:51.536181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5d5b0f3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-09 06:10:51.536223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5']}})
2026-04-09 06:10:51.536237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:10:51.536262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:10:51.536280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-11-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-09 06:10:51.536293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:10:51.536305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw', 'dm-uuid-CRYPT-LUKS2-34a00b1693eb41a48240b70c6fb1290d-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-09 06:10:51.536324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:10:52.951219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'uuids': ['34a00b16-93eb-41a4-8240-b70c6fb1290d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw']}})
2026-04-09 06:10:52.951350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141']}})
2026-04-09 06:10:52.951369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:10:52.951404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bd1f840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-09 06:10:52.951445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:10:52.951459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:10:52.951471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ', 'dm-uuid-CRYPT-LUKS2-9adc505859dc41deadf6afc54c646e02-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-09 06:10:52.951484 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:10:52.951497 | orchestrator |
2026-04-09 06:10:52.951509 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-09 06:10:52.951522 | orchestrator | Thursday 09 April 2026 06:10:52 +0000 (0:00:01.464) 0:59:54.679 ********
2026-04-09 06:10:52.951535 | orchestrator | skipping: [testbed-node-3] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:52.951554 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141', 'dm-uuid-LVM-Vd5rxgKUs73TU4Fbsf9r5IJx2JqOBc0dUhLn892OVFegRXfoAocTUnq1hUBwAqbQ'], 'uuids': ['9adc5058-59dc-41de-adf6-afc54c646e02'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:52.951566 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be', 'scsi-SQEMU_QEMU_HARDDISK_5d5b0f3e-c55a-4f41-a738-3802883821be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5d5b0f3e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:52.951595 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-e4gHGq-azk6-pcuI-7Nw2-ZeJR-MqdR-RoIdn8', 'scsi-0QEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761', 'scsi-SQEMU_QEMU_HARDDISK_f5862b72-8b25-453b-aa97-7293a3d52761'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.076716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.076809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.076864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-11-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.076879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.076891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw', 'dm-uuid-CRYPT-LUKS2-34a00b1693eb41a48240b70c6fb1290d-YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.076925 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.076956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f59a7c8--f88e--51a3--9620--37640e0ff9b5-osd--block--2f59a7c8--f88e--51a3--9620--37640e0ff9b5', 'dm-uuid-LVM-oNDzr1Rndp1i5vNhITRGHxSNPadq9yP2YYkPY1YV0VAfbaBkscBIapC0b1XJToPw'], 'uuids': ['34a00b16-93eb-41a4-8240-b70c6fb1290d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5862b72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YYkPY1-YV0V-Afba-Bksc-BIap-C0b1-XJToPw']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.076974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0XRV4m-9HLY-z9SL-9jLS-42la-BMLC-ZGW5lb', 'scsi-0QEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad', 'scsi-SQEMU_QEMU_HARDDISK_162ed735-fecb-4ea3-8d95-f21f614c20ad'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '162ed735', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1db77c01--2d77--5e1e--8d0a--4e535706b141-osd--block--1db77c01--2d77--5e1e--8d0a--4e535706b141']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.076990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:10:53.077011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bd1f840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1', 'scsi-SQEMU_QEMU_HARDDISK_0bd1f840-453a-48b2-ad16-1f5136864411-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:11:22.454093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:11:22.454225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:11:22.454244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ', 'dm-uuid-CRYPT-LUKS2-9adc505859dc41deadf6afc54c646e02-UhLn89-2OVF-egRX-foAo-cTUn-q1hU-BwAqbQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-09 06:11:22.454279 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.454294 | orchestrator |
2026-04-09 06:11:22.454307 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-09 06:11:22.454319 | orchestrator | Thursday 09 April 2026 06:10:54 +0000 (0:00:01.421) 0:59:56.100 ********
2026-04-09 06:11:22.454331 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:11:22.454343 | orchestrator |
2026-04-09 06:11:22.454354 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 06:11:22.454365 | orchestrator | Thursday 09 April 2026 06:10:55 +0000 (0:00:01.503) 0:59:57.604 ********
2026-04-09 06:11:22.454376 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:11:22.454387 | orchestrator |
2026-04-09 06:11:22.454398 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 06:11:22.454409 | orchestrator | Thursday 09 April 2026 06:10:56 +0000 (0:00:01.187) 0:59:58.791 ********
2026-04-09 06:11:22.454420 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:11:22.454431 | orchestrator |
2026-04-09 06:11:22.454442 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 06:11:22.454453 | orchestrator | Thursday 09 April 2026 06:10:58 +0000 (0:00:01.481) 1:00:00.273 ********
2026-04-09 06:11:22.454464 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.454475 | orchestrator |
2026-04-09 06:11:22.454487 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 06:11:22.454499 | orchestrator | Thursday 09 April 2026 06:10:59 +0000 (0:00:01.116) 1:00:01.390 ********
2026-04-09 06:11:22.454510 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.454521 | orchestrator |
2026-04-09 06:11:22.454532 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 06:11:22.454543 | orchestrator | Thursday 09 April 2026 06:11:00 +0000 (0:00:01.319) 1:00:02.709 ********
2026-04-09 06:11:22.454554 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.454565 | orchestrator |
2026-04-09 06:11:22.454579 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 06:11:22.454591 | orchestrator | Thursday 09 April 2026 06:11:02 +0000 (0:00:01.182) 1:00:03.892 ********
2026-04-09 06:11:22.454605 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 06:11:22.454618 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 06:11:22.454631 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 06:11:22.454644 | orchestrator |
2026-04-09 06:11:22.454657 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 06:11:22.454670 | orchestrator | Thursday 09 April 2026 06:11:04 +0000 (0:00:02.160) 1:00:06.052 ********
2026-04-09 06:11:22.454683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 06:11:22.454696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 06:11:22.454709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 06:11:22.454723 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.454736 | orchestrator |
2026-04-09 06:11:22.454748 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 06:11:22.454762 | orchestrator | Thursday 09 April 2026 06:11:05 +0000 (0:00:01.225) 1:00:07.278 ********
2026-04-09 06:11:22.454791 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-04-09 06:11:22.454805 | orchestrator |
2026-04-09 06:11:22.454820 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 06:11:22.454854 | orchestrator | Thursday 09 April 2026 06:11:06 +0000 (0:00:01.115) 1:00:08.394 ********
2026-04-09 06:11:22.454867 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.454880 | orchestrator |
2026-04-09 06:11:22.454893 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 06:11:22.454906 | orchestrator | Thursday 09 April 2026 06:11:07 +0000 (0:00:01.157) 1:00:09.551 ********
2026-04-09 06:11:22.454927 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.454941 | orchestrator |
2026-04-09 06:11:22.454954 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 06:11:22.454966 | orchestrator | Thursday 09 April 2026 06:11:08 +0000 (0:00:01.137) 1:00:10.689 ********
2026-04-09 06:11:22.454977 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.454988 | orchestrator |
2026-04-09 06:11:22.455004 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 06:11:22.455016 | orchestrator | Thursday 09 April 2026 06:11:09 +0000 (0:00:01.139) 1:00:11.829 ********
2026-04-09 06:11:22.455027 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:11:22.455038 | orchestrator |
2026-04-09 06:11:22.455050 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 06:11:22.455061 | orchestrator | Thursday 09 April 2026 06:11:11 +0000 (0:00:01.269) 1:00:13.098 ********
2026-04-09 06:11:22.455071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 06:11:22.455083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 06:11:22.455094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 06:11:22.455105 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.455116 | orchestrator |
2026-04-09 06:11:22.455127 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 06:11:22.455138 | orchestrator | Thursday 09 April 2026 06:11:12 +0000 (0:00:01.436) 1:00:14.535 ********
2026-04-09 06:11:22.455149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 06:11:22.455160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 06:11:22.455171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 06:11:22.455182 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.455194 | orchestrator |
2026-04-09 06:11:22.455205 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 06:11:22.455216 | orchestrator | Thursday 09 April 2026 06:11:14 +0000 (0:00:01.373) 1:00:15.908 ********
2026-04-09 06:11:22.455227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 06:11:22.455238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 06:11:22.455249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 06:11:22.455260 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:11:22.455271 | orchestrator |
2026-04-09 06:11:22.455282 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 06:11:22.455293 | orchestrator | Thursday 09 April 2026 06:11:15 +0000 (0:00:01.503) 1:00:17.412 ********
2026-04-09 06:11:22.455304 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:11:22.455315 | orchestrator |
2026-04-09 06:11:22.455326 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 06:11:22.455337 | orchestrator | Thursday 09 April 2026 06:11:16 +0000 (0:00:01.178) 1:00:18.590 ********
2026-04-09 06:11:22.455348 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 06:11:22.455359 | orchestrator |
2026-04-09 06:11:22.455370 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 06:11:22.455381 | orchestrator | Thursday 09 April 2026 06:11:18 +0000 (0:00:01.714) 1:00:20.305 ********
2026-04-09 06:11:22.455392 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:11:22.455403 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:11:22.455415 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:11:22.455426 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 06:11:22.455437 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 06:11:22.455448 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 06:11:22.455459 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 06:11:22.455476 | orchestrator |
2026-04-09 06:11:22.455488 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 06:11:22.455498 | orchestrator | Thursday 09 April 2026 06:11:20 +0000 (0:00:02.304) 1:00:22.610 ********
2026-04-09 06:11:22.455510 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:11:22.455521 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:11:22.455532 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:11:22.455543 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 06:11:22.455554 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 06:11:22.455565 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 06:11:22.455576 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 06:11:22.455587 | orchestrator |
2026-04-09 06:11:22.455606 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-04-09 06:12:16.407318 | orchestrator | Thursday 09 April 2026 06:11:23 +0000 (0:00:02.621) 1:00:25.232 ********
2026-04-09 06:12:16.407439 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:12:16.407458 | orchestrator |
2026-04-09 06:12:16.407471 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-04-09 06:12:16.407482 | orchestrator | Thursday 09 April 2026 06:11:25 +0000 (0:00:02.260) 1:00:27.493 ********
2026-04-09 06:12:16.407494 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 06:12:16.407507 | orchestrator |
2026-04-09 06:12:16.407519 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-04-09 06:12:16.407530 | orchestrator | Thursday 09 April 2026 06:11:28 +0000 (0:00:02.869) 1:00:30.362 ********
2026-04-09 06:12:16.407542 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 06:12:16.407553 | orchestrator |
2026-04-09 06:12:16.407580 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 06:12:16.407592 | orchestrator | Thursday 09 April 2026 06:11:30 +0000 (0:00:02.196) 1:00:32.558 ********
2026-04-09 06:12:16.407603 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-04-09 06:12:16.407614 | orchestrator |
2026-04-09 06:12:16.407625 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 06:12:16.407635 | orchestrator | Thursday 09 April 2026 06:11:31 +0000 (0:00:01.211) 1:00:33.770 ********
2026-04-09 06:12:16.407646 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-04-09 06:12:16.407657 | orchestrator |
2026-04-09 06:12:16.407668 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 06:12:16.407679 | orchestrator | Thursday 09 April 2026 06:11:33 +0000 (0:00:01.150) 1:00:34.921 ********
2026-04-09 06:12:16.407690 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:12:16.407701 | orchestrator |
2026-04-09 06:12:16.407712 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 06:12:16.407722 | orchestrator | Thursday 09 April 2026 06:11:34 +0000 (0:00:01.113) 1:00:36.034 ********
2026-04-09 06:12:16.407734 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:12:16.407746 | orchestrator |
2026-04-09 06:12:16.407758 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 06:12:16.407768 | orchestrator | Thursday 09 April 2026 06:11:35 +0000 (0:00:01.498) 1:00:37.533 ********
2026-04-09 06:12:16.407779 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:12:16.407790 | orchestrator |
2026-04-09 06:12:16.407800 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 06:12:16.407900 | orchestrator | Thursday 09 April 2026 06:11:37 +0000 (0:00:01.499) 1:00:39.033 ********
2026-04-09 06:12:16.407918 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:12:16.407931 | orchestrator |
2026-04-09 06:12:16.407945 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 06:12:16.407959 | orchestrator | Thursday 09 April 2026 06:11:38 +0000 (0:00:01.570) 1:00:40.603 ********
2026-04-09 06:12:16.407972 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:12:16.407985 | orchestrator |
2026-04-09 06:12:16.407998 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 06:12:16.408009 | orchestrator | Thursday 09 April 2026 06:11:39 +0000 (0:00:01.164) 1:00:41.768 ********
2026-04-09 06:12:16.408020 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:12:16.408031 | orchestrator |
2026-04-09 06:12:16.408042 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 06:12:16.408053 | orchestrator | Thursday 09 April 2026 06:11:41 +0000 (0:00:01.231) 1:00:42.999 ********
2026-04-09 06:12:16.408064 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:12:16.408074 | orchestrator |
2026-04-09 06:12:16.408085 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 06:12:16.408096 | orchestrator | Thursday 09 April 2026 06:11:42 +0000 (0:00:01.143) 1:00:44.143 ********
2026-04-09 06:12:16.408107 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:12:16.408118 | orchestrator |
2026-04-09 06:12:16.408128 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 06:12:16.408139 | orchestrator | Thursday 09 April 2026 06:11:43 +0000 (0:00:01.526) 1:00:45.669 ********
2026-04-09 06:12:16.408150 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:12:16.408161 | orchestrator |
2026-04-09 06:12:16.408171 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 06:12:16.408182 | orchestrator | Thursday 09 April 2026 06:11:45 +0000 (0:00:01.577) 1:00:47.247 ********
2026-04-09 06:12:16.408193 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:12:16.408204 | orchestrator |
2026-04-09 06:12:16.408215 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 06:12:16.408225 | orchestrator | Thursday 09 April 2026 06:11:46 +0000 (0:00:01.164) 1:00:48.412 ********
2026-04-09 06:12:16.408236 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:12:16.408247 | orchestrator |
2026-04-09 06:12:16.408258 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 06:12:16.408269 | orchestrator | Thursday 09 April 2026 06:11:47 +0000 (0:00:01.163) 1:00:49.576 ********
2026-04-09 06:12:16.408279 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:12:16.408290 | orchestrator |
2026-04-09 06:12:16.408301 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 06:12:16.408312 | orchestrator | Thursday 09 April 2026 06:11:48 +0000 (0:00:01.150) 1:00:50.726 ********
2026-04-09 06:12:16.408323 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:12:16.408333 | orchestrator |
2026-04-09 06:12:16.408344 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 06:12:16.408354
| orchestrator | Thursday 09 April 2026 06:11:49 +0000 (0:00:01.142) 1:00:51.868 ******** 2026-04-09 06:12:16.408365 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:12:16.408376 | orchestrator | 2026-04-09 06:12:16.408404 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 06:12:16.408416 | orchestrator | Thursday 09 April 2026 06:11:51 +0000 (0:00:01.199) 1:00:53.068 ******** 2026-04-09 06:12:16.408427 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.408438 | orchestrator | 2026-04-09 06:12:16.408449 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 06:12:16.408460 | orchestrator | Thursday 09 April 2026 06:11:52 +0000 (0:00:01.196) 1:00:54.264 ******** 2026-04-09 06:12:16.408471 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.408481 | orchestrator | 2026-04-09 06:12:16.408492 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 06:12:16.408511 | orchestrator | Thursday 09 April 2026 06:11:53 +0000 (0:00:01.178) 1:00:55.442 ******** 2026-04-09 06:12:16.408523 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.408533 | orchestrator | 2026-04-09 06:12:16.408544 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 06:12:16.408555 | orchestrator | Thursday 09 April 2026 06:11:54 +0000 (0:00:01.229) 1:00:56.672 ******** 2026-04-09 06:12:16.408566 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:12:16.408577 | orchestrator | 2026-04-09 06:12:16.408594 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 06:12:16.408606 | orchestrator | Thursday 09 April 2026 06:11:56 +0000 (0:00:01.305) 1:00:57.977 ******** 2026-04-09 06:12:16.408617 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:12:16.408628 | orchestrator | 2026-04-09 06:12:16.408638 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 06:12:16.408649 | orchestrator | Thursday 09 April 2026 06:11:57 +0000 (0:00:01.226) 1:00:59.204 ******** 2026-04-09 06:12:16.408660 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.408671 | orchestrator | 2026-04-09 06:12:16.408682 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 06:12:16.408693 | orchestrator | Thursday 09 April 2026 06:11:58 +0000 (0:00:01.102) 1:01:00.307 ******** 2026-04-09 06:12:16.408704 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.408715 | orchestrator | 2026-04-09 06:12:16.408725 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 06:12:16.408736 | orchestrator | Thursday 09 April 2026 06:11:59 +0000 (0:00:01.133) 1:01:01.441 ******** 2026-04-09 06:12:16.408747 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.408758 | orchestrator | 2026-04-09 06:12:16.408769 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 06:12:16.408780 | orchestrator | Thursday 09 April 2026 06:12:00 +0000 (0:00:01.123) 1:01:02.564 ******** 2026-04-09 06:12:16.408872 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.408884 | orchestrator | 2026-04-09 06:12:16.408896 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 06:12:16.408906 | orchestrator | Thursday 09 April 2026 06:12:01 +0000 (0:00:01.111) 1:01:03.676 ******** 2026-04-09 06:12:16.408917 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.408928 | orchestrator | 2026-04-09 06:12:16.408939 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 06:12:16.408950 | orchestrator | Thursday 09 April 2026 06:12:02 +0000 (0:00:01.108) 1:01:04.785 ******** 
2026-04-09 06:12:16.408961 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.408971 | orchestrator | 2026-04-09 06:12:16.408982 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 06:12:16.408993 | orchestrator | Thursday 09 April 2026 06:12:04 +0000 (0:00:01.150) 1:01:05.935 ******** 2026-04-09 06:12:16.409004 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.409015 | orchestrator | 2026-04-09 06:12:16.409026 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-09 06:12:16.409038 | orchestrator | Thursday 09 April 2026 06:12:05 +0000 (0:00:01.169) 1:01:07.105 ******** 2026-04-09 06:12:16.409049 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.409060 | orchestrator | 2026-04-09 06:12:16.409071 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 06:12:16.409082 | orchestrator | Thursday 09 April 2026 06:12:06 +0000 (0:00:01.159) 1:01:08.264 ******** 2026-04-09 06:12:16.409093 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.409104 | orchestrator | 2026-04-09 06:12:16.409114 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 06:12:16.409125 | orchestrator | Thursday 09 April 2026 06:12:07 +0000 (0:00:01.127) 1:01:09.392 ******** 2026-04-09 06:12:16.409136 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.409147 | orchestrator | 2026-04-09 06:12:16.409158 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 06:12:16.409177 | orchestrator | Thursday 09 April 2026 06:12:08 +0000 (0:00:01.135) 1:01:10.528 ******** 2026-04-09 06:12:16.409188 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.409199 | orchestrator | 2026-04-09 06:12:16.409210 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-09 06:12:16.409220 | orchestrator | Thursday 09 April 2026 06:12:09 +0000 (0:00:01.191) 1:01:11.719 ******** 2026-04-09 06:12:16.409231 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:12:16.409242 | orchestrator | 2026-04-09 06:12:16.409253 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 06:12:16.409264 | orchestrator | Thursday 09 April 2026 06:12:11 +0000 (0:00:01.200) 1:01:12.920 ******** 2026-04-09 06:12:16.409275 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:12:16.409285 | orchestrator | 2026-04-09 06:12:16.409296 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 06:12:16.409307 | orchestrator | Thursday 09 April 2026 06:12:13 +0000 (0:00:01.972) 1:01:14.892 ******** 2026-04-09 06:12:16.409317 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:12:16.409328 | orchestrator | 2026-04-09 06:12:16.409339 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 06:12:16.409350 | orchestrator | Thursday 09 April 2026 06:12:15 +0000 (0:00:02.238) 1:01:17.131 ******** 2026-04-09 06:12:16.409361 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-04-09 06:12:16.409372 | orchestrator | 2026-04-09 06:12:16.409383 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 06:12:16.409402 | orchestrator | Thursday 09 April 2026 06:12:16 +0000 (0:00:01.133) 1:01:18.265 ******** 2026-04-09 06:13:03.499333 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.499452 | orchestrator | 2026-04-09 06:13:03.499469 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 06:13:03.499483 | orchestrator | Thursday 09 April 2026 06:12:17 +0000 (0:00:01.214) 1:01:19.479 ******** 
2026-04-09 06:13:03.499494 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.499506 | orchestrator | 2026-04-09 06:13:03.499518 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 06:13:03.499529 | orchestrator | Thursday 09 April 2026 06:12:18 +0000 (0:00:01.127) 1:01:20.607 ******** 2026-04-09 06:13:03.499540 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 06:13:03.499551 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 06:13:03.499562 | orchestrator | 2026-04-09 06:13:03.499574 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 06:13:03.499601 | orchestrator | Thursday 09 April 2026 06:12:20 +0000 (0:00:01.809) 1:01:22.416 ******** 2026-04-09 06:13:03.499612 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:13:03.499624 | orchestrator | 2026-04-09 06:13:03.499635 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 06:13:03.499646 | orchestrator | Thursday 09 April 2026 06:12:21 +0000 (0:00:01.441) 1:01:23.858 ******** 2026-04-09 06:13:03.499657 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.499668 | orchestrator | 2026-04-09 06:13:03.499679 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 06:13:03.499690 | orchestrator | Thursday 09 April 2026 06:12:23 +0000 (0:00:01.237) 1:01:25.096 ******** 2026-04-09 06:13:03.499701 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.499712 | orchestrator | 2026-04-09 06:13:03.499723 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 06:13:03.499734 | orchestrator | Thursday 09 April 2026 06:12:24 +0000 (0:00:01.161) 1:01:26.257 ******** 2026-04-09 06:13:03.499745 | orchestrator | 
skipping: [testbed-node-3] 2026-04-09 06:13:03.499756 | orchestrator | 2026-04-09 06:13:03.499767 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 06:13:03.499799 | orchestrator | Thursday 09 April 2026 06:12:25 +0000 (0:00:01.160) 1:01:27.418 ******** 2026-04-09 06:13:03.499811 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-04-09 06:13:03.499823 | orchestrator | 2026-04-09 06:13:03.499833 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 06:13:03.499868 | orchestrator | Thursday 09 April 2026 06:12:26 +0000 (0:00:01.109) 1:01:28.528 ******** 2026-04-09 06:13:03.499882 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:13:03.499895 | orchestrator | 2026-04-09 06:13:03.499909 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-09 06:13:03.499923 | orchestrator | Thursday 09 April 2026 06:12:28 +0000 (0:00:01.910) 1:01:30.438 ******** 2026-04-09 06:13:03.499936 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 06:13:03.499950 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 06:13:03.499963 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 06:13:03.499975 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.499988 | orchestrator | 2026-04-09 06:13:03.500001 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-09 06:13:03.500015 | orchestrator | Thursday 09 April 2026 06:12:29 +0000 (0:00:01.162) 1:01:31.601 ******** 2026-04-09 06:13:03.500028 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500040 | orchestrator | 2026-04-09 06:13:03.500054 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-09 06:13:03.500067 | orchestrator | Thursday 09 April 2026 06:12:30 +0000 (0:00:01.154) 1:01:32.755 ******** 2026-04-09 06:13:03.500080 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500093 | orchestrator | 2026-04-09 06:13:03.500106 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-09 06:13:03.500119 | orchestrator | Thursday 09 April 2026 06:12:32 +0000 (0:00:01.156) 1:01:33.911 ******** 2026-04-09 06:13:03.500132 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500145 | orchestrator | 2026-04-09 06:13:03.500158 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-09 06:13:03.500171 | orchestrator | Thursday 09 April 2026 06:12:33 +0000 (0:00:01.191) 1:01:35.103 ******** 2026-04-09 06:13:03.500184 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500197 | orchestrator | 2026-04-09 06:13:03.500212 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-09 06:13:03.500223 | orchestrator | Thursday 09 April 2026 06:12:34 +0000 (0:00:01.149) 1:01:36.252 ******** 2026-04-09 06:13:03.500234 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500245 | orchestrator | 2026-04-09 06:13:03.500256 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 06:13:03.500267 | orchestrator | Thursday 09 April 2026 06:12:35 +0000 (0:00:01.179) 1:01:37.432 ******** 2026-04-09 06:13:03.500278 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:13:03.500289 | orchestrator | 2026-04-09 06:13:03.500300 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 06:13:03.500311 | orchestrator | Thursday 09 April 2026 06:12:37 +0000 (0:00:02.393) 1:01:39.825 ******** 2026-04-09 06:13:03.500322 | orchestrator | ok: 
[testbed-node-3] 2026-04-09 06:13:03.500333 | orchestrator | 2026-04-09 06:13:03.500344 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 06:13:03.500355 | orchestrator | Thursday 09 April 2026 06:12:39 +0000 (0:00:01.152) 1:01:40.977 ******** 2026-04-09 06:13:03.500366 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-04-09 06:13:03.500377 | orchestrator | 2026-04-09 06:13:03.500388 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-09 06:13:03.500415 | orchestrator | Thursday 09 April 2026 06:12:40 +0000 (0:00:01.369) 1:01:42.347 ******** 2026-04-09 06:13:03.500427 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500446 | orchestrator | 2026-04-09 06:13:03.500457 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-09 06:13:03.500468 | orchestrator | Thursday 09 April 2026 06:12:41 +0000 (0:00:01.179) 1:01:43.527 ******** 2026-04-09 06:13:03.500480 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500491 | orchestrator | 2026-04-09 06:13:03.500502 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-09 06:13:03.500513 | orchestrator | Thursday 09 April 2026 06:12:42 +0000 (0:00:01.144) 1:01:44.671 ******** 2026-04-09 06:13:03.500524 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500535 | orchestrator | 2026-04-09 06:13:03.500546 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-09 06:13:03.500557 | orchestrator | Thursday 09 April 2026 06:12:43 +0000 (0:00:01.161) 1:01:45.833 ******** 2026-04-09 06:13:03.500568 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500579 | orchestrator | 2026-04-09 06:13:03.500595 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-09 06:13:03.500607 | orchestrator | Thursday 09 April 2026 06:12:45 +0000 (0:00:01.171) 1:01:47.004 ******** 2026-04-09 06:13:03.500618 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500629 | orchestrator | 2026-04-09 06:13:03.500640 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-09 06:13:03.500651 | orchestrator | Thursday 09 April 2026 06:12:46 +0000 (0:00:01.161) 1:01:48.165 ******** 2026-04-09 06:13:03.500662 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500673 | orchestrator | 2026-04-09 06:13:03.500691 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-09 06:13:03.500710 | orchestrator | Thursday 09 April 2026 06:12:47 +0000 (0:00:01.167) 1:01:49.333 ******** 2026-04-09 06:13:03.500729 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500747 | orchestrator | 2026-04-09 06:13:03.500765 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-09 06:13:03.500783 | orchestrator | Thursday 09 April 2026 06:12:48 +0000 (0:00:01.158) 1:01:50.492 ******** 2026-04-09 06:13:03.500801 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:03.500818 | orchestrator | 2026-04-09 06:13:03.500837 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-09 06:13:03.500878 | orchestrator | Thursday 09 April 2026 06:12:49 +0000 (0:00:01.187) 1:01:51.680 ******** 2026-04-09 06:13:03.500896 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:13:03.500915 | orchestrator | 2026-04-09 06:13:03.500934 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-09 06:13:03.500952 | orchestrator | Thursday 09 April 2026 06:12:50 +0000 (0:00:01.165) 1:01:52.845 ******** 2026-04-09 06:13:03.500968 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-04-09 06:13:03.500980 | orchestrator | 2026-04-09 06:13:03.500991 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-09 06:13:03.501002 | orchestrator | Thursday 09 April 2026 06:12:52 +0000 (0:00:01.158) 1:01:54.004 ******** 2026-04-09 06:13:03.501013 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-04-09 06:13:03.501025 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-09 06:13:03.501036 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-09 06:13:03.501047 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-09 06:13:03.501058 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-09 06:13:03.501070 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-09 06:13:03.501081 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-09 06:13:03.501091 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-09 06:13:03.501103 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-09 06:13:03.501114 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-09 06:13:03.501125 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-09 06:13:03.501145 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-09 06:13:03.501156 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-09 06:13:03.501167 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-09 06:13:03.501178 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-04-09 06:13:03.501189 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-04-09 06:13:03.501200 | orchestrator | 2026-04-09 06:13:03.501211 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-09 06:13:03.501222 | orchestrator | Thursday 09 April 2026 06:12:58 +0000 (0:00:06.509) 1:02:00.514 ******** 2026-04-09 06:13:03.501233 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-04-09 06:13:03.501244 | orchestrator | 2026-04-09 06:13:03.501255 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-09 06:13:03.501266 | orchestrator | Thursday 09 April 2026 06:12:59 +0000 (0:00:01.342) 1:02:01.857 ******** 2026-04-09 06:13:03.501277 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 06:13:03.501290 | orchestrator | 2026-04-09 06:13:03.501301 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-09 06:13:03.501312 | orchestrator | Thursday 09 April 2026 06:13:01 +0000 (0:00:01.513) 1:02:03.370 ******** 2026-04-09 06:13:03.501322 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 06:13:03.501334 | orchestrator | 2026-04-09 06:13:03.501344 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 06:13:03.501365 | orchestrator | Thursday 09 April 2026 06:13:03 +0000 (0:00:01.987) 1:02:05.358 ******** 2026-04-09 06:13:53.858791 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859003 | orchestrator | 2026-04-09 06:13:53.859023 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 06:13:53.859037 | orchestrator | Thursday 09 April 2026 06:13:04 +0000 (0:00:01.125) 1:02:06.483 ******** 2026-04-09 06:13:53.859048 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859060 | 
orchestrator | 2026-04-09 06:13:53.859071 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 06:13:53.859082 | orchestrator | Thursday 09 April 2026 06:13:05 +0000 (0:00:01.163) 1:02:07.647 ******** 2026-04-09 06:13:53.859094 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859105 | orchestrator | 2026-04-09 06:13:53.859116 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-09 06:13:53.859128 | orchestrator | Thursday 09 April 2026 06:13:06 +0000 (0:00:01.117) 1:02:08.765 ******** 2026-04-09 06:13:53.859138 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859149 | orchestrator | 2026-04-09 06:13:53.859180 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 06:13:53.859192 | orchestrator | Thursday 09 April 2026 06:13:08 +0000 (0:00:01.118) 1:02:09.884 ******** 2026-04-09 06:13:53.859203 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859213 | orchestrator | 2026-04-09 06:13:53.859225 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-09 06:13:53.859237 | orchestrator | Thursday 09 April 2026 06:13:09 +0000 (0:00:01.146) 1:02:11.030 ******** 2026-04-09 06:13:53.859248 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859259 | orchestrator | 2026-04-09 06:13:53.859270 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-09 06:13:53.859281 | orchestrator | Thursday 09 April 2026 06:13:10 +0000 (0:00:01.121) 1:02:12.151 ******** 2026-04-09 06:13:53.859294 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859309 | orchestrator | 2026-04-09 06:13:53.859323 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-09 06:13:53.859362 | orchestrator | Thursday 09 April 2026 06:13:11 +0000 (0:00:01.142) 1:02:13.294 ******** 2026-04-09 06:13:53.859376 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859388 | orchestrator | 2026-04-09 06:13:53.859402 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-09 06:13:53.859414 | orchestrator | Thursday 09 April 2026 06:13:12 +0000 (0:00:01.162) 1:02:14.457 ******** 2026-04-09 06:13:53.859426 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859439 | orchestrator | 2026-04-09 06:13:53.859452 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-09 06:13:53.859465 | orchestrator | Thursday 09 April 2026 06:13:13 +0000 (0:00:01.176) 1:02:15.633 ******** 2026-04-09 06:13:53.859477 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859490 | orchestrator | 2026-04-09 06:13:53.859503 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-09 06:13:53.859516 | orchestrator | Thursday 09 April 2026 06:13:14 +0000 (0:00:01.151) 1:02:16.785 ******** 2026-04-09 06:13:53.859528 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859541 | orchestrator | 2026-04-09 06:13:53.859553 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-09 06:13:53.859566 | orchestrator | Thursday 09 April 2026 06:13:16 +0000 (0:00:01.131) 1:02:17.917 ******** 2026-04-09 06:13:53.859579 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-09 06:13:53.859591 | orchestrator | 2026-04-09 06:13:53.859604 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-09 06:13:53.859616 | orchestrator | Thursday 09 April 2026 06:13:20 +0000 (0:00:04.376) 1:02:22.294 ******** 2026-04-09 06:13:53.859630 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 06:13:53.859644 | orchestrator | 2026-04-09 06:13:53.859657 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 06:13:53.859669 | orchestrator | Thursday 09 April 2026 06:13:21 +0000 (0:00:01.158) 1:02:23.452 ******** 2026-04-09 06:13:53.859683 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-09 06:13:53.859698 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-09 06:13:53.859711 | orchestrator | 2026-04-09 06:13:53.859722 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 06:13:53.859733 | orchestrator | Thursday 09 April 2026 06:13:26 +0000 (0:00:04.751) 1:02:28.204 ******** 2026-04-09 06:13:53.859744 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859755 | orchestrator | 2026-04-09 06:13:53.859767 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 06:13:53.859777 | orchestrator | Thursday 09 April 2026 06:13:27 +0000 (0:00:01.137) 1:02:29.342 ******** 2026-04-09 06:13:53.859788 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859799 | orchestrator | 2026-04-09 06:13:53.859810 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 06:13:53.859842 | orchestrator | Thursday 09 April 2026 06:13:28 +0000 (0:00:01.103) 1:02:30.446 ******** 2026-04-09 06:13:53.859876 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859888 | orchestrator | 2026-04-09 06:13:53.859899 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 06:13:53.859921 | orchestrator | Thursday 09 April 2026 06:13:29 +0000 (0:00:01.194) 1:02:31.641 ******** 2026-04-09 06:13:53.859932 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859943 | orchestrator | 2026-04-09 06:13:53.859954 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 06:13:53.859965 | orchestrator | Thursday 09 April 2026 06:13:30 +0000 (0:00:01.136) 1:02:32.778 ******** 2026-04-09 06:13:53.859976 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.859987 | orchestrator | 2026-04-09 06:13:53.859998 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 06:13:53.860009 | orchestrator | Thursday 09 April 2026 06:13:32 +0000 (0:00:01.186) 1:02:33.964 ******** 2026-04-09 06:13:53.860020 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:13:53.860032 | orchestrator | 2026-04-09 06:13:53.860049 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 06:13:53.860061 | orchestrator | Thursday 09 April 2026 06:13:33 +0000 (0:00:01.281) 1:02:35.245 ******** 2026-04-09 06:13:53.860072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 06:13:53.860084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 06:13:53.860095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 06:13:53.860106 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 06:13:53.860117 | orchestrator | 2026-04-09 06:13:53.860128 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 06:13:53.860139 | orchestrator | Thursday 09 April 2026 06:13:35 +0000 (0:00:01.863) 1:02:37.109 ******** 2026-04-09 06:13:53.860150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 06:13:53.860161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 06:13:53.860172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 06:13:53.860183 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.860194 | orchestrator | 2026-04-09 06:13:53.860205 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 06:13:53.860216 | orchestrator | Thursday 09 April 2026 06:13:37 +0000 (0:00:01.812) 1:02:38.922 ******** 2026-04-09 06:13:53.860227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 06:13:53.860238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 06:13:53.860249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 06:13:53.860260 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.860271 | orchestrator | 2026-04-09 06:13:53.860282 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 06:13:53.860293 | orchestrator | Thursday 09 April 2026 06:13:38 +0000 (0:00:01.854) 1:02:40.776 ******** 2026-04-09 06:13:53.860304 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:13:53.860315 | orchestrator | 2026-04-09 06:13:53.860326 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 06:13:53.860337 | orchestrator | Thursday 09 April 2026 06:13:40 +0000 (0:00:01.158) 1:02:41.935 ******** 2026-04-09 06:13:53.860348 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-04-09 06:13:53.860359 | orchestrator | 2026-04-09 06:13:53.860370 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-09 06:13:53.860381 | orchestrator | Thursday 09 April 2026 06:13:41 +0000 (0:00:01.550) 1:02:43.485 ******** 2026-04-09 06:13:53.860392 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:13:53.860403 | orchestrator | 2026-04-09 06:13:53.860414 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-09 06:13:53.860425 | orchestrator | Thursday 09 April 2026 06:13:43 +0000 (0:00:01.817) 1:02:45.303 ******** 2026-04-09 06:13:53.860436 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-04-09 06:13:53.860447 | orchestrator | 2026-04-09 06:13:53.860458 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 06:13:53.860476 | orchestrator | Thursday 09 April 2026 06:13:44 +0000 (0:00:01.528) 1:02:46.831 ******** 2026-04-09 06:13:53.860487 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 06:13:53.860499 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 06:13:53.860510 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 06:13:53.860521 | orchestrator | 2026-04-09 06:13:53.860532 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 06:13:53.860543 | orchestrator | Thursday 09 April 2026 06:13:48 +0000 (0:00:03.258) 1:02:50.090 ******** 2026-04-09 06:13:53.860554 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-09 06:13:53.860565 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 06:13:53.860576 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:13:53.860587 | orchestrator | 2026-04-09 06:13:53.860598 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-09 06:13:53.860609 | orchestrator | Thursday 09 April 2026 06:13:50 +0000 (0:00:01.984) 1:02:52.074 ******** 2026-04-09 06:13:53.860620 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:13:53.860631 | orchestrator | 2026-04-09 06:13:53.860642 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-09 06:13:53.860653 | orchestrator | Thursday 09 April 2026 06:13:51 +0000 (0:00:01.147) 1:02:53.222 ******** 2026-04-09 06:13:53.860664 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-04-09 06:13:53.860676 | orchestrator | 2026-04-09 06:13:53.860687 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-09 06:13:53.860698 | orchestrator | Thursday 09 April 2026 06:13:52 +0000 (0:00:01.522) 1:02:54.744 ******** 2026-04-09 06:13:53.860716 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 06:15:11.495246 | orchestrator | 2026-04-09 06:15:11.495350 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-09 06:15:11.495369 | orchestrator | Thursday 09 April 2026 06:13:54 +0000 (0:00:02.093) 1:02:56.837 ******** 2026-04-09 06:15:11.495381 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 06:15:11.495393 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 06:15:11.495405 | orchestrator | 2026-04-09 06:15:11.495416 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 06:15:11.495427 | orchestrator | Thursday 09 April 2026 06:14:00 +0000 (0:00:05.500) 1:03:02.338 ******** 
2026-04-09 06:15:11.495438 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 06:15:11.495473 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 06:15:11.495485 | orchestrator | 2026-04-09 06:15:11.495496 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 06:15:11.495507 | orchestrator | Thursday 09 April 2026 06:14:03 +0000 (0:00:03.184) 1:03:05.522 ******** 2026-04-09 06:15:11.495518 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-09 06:15:11.495530 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:15:11.495541 | orchestrator | 2026-04-09 06:15:11.495553 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-09 06:15:11.495563 | orchestrator | Thursday 09 April 2026 06:14:05 +0000 (0:00:01.978) 1:03:07.501 ******** 2026-04-09 06:15:11.495574 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-09 06:15:11.495585 | orchestrator | 2026-04-09 06:15:11.495596 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-09 06:15:11.495607 | orchestrator | Thursday 09 April 2026 06:14:07 +0000 (0:00:01.515) 1:03:09.017 ******** 2026-04-09 06:15:11.495618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495762 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:15:11.495773 | orchestrator | 2026-04-09 06:15:11.495784 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-09 06:15:11.495795 | orchestrator | Thursday 09 April 2026 06:14:08 +0000 (0:00:01.595) 1:03:10.612 ******** 2026-04-09 06:15:11.495806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:15:11.495861 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:15:11.495872 | orchestrator | 2026-04-09 06:15:11.495920 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-09 06:15:11.495932 | orchestrator | Thursday 09 April 2026 06:14:10 +0000 (0:00:01.600) 1:03:12.213 ******** 2026-04-09 06:15:11.495943 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:15:11.495955 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:15:11.495966 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:15:11.495983 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:15:11.496003 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:15:11.496019 | orchestrator | 2026-04-09 06:15:11.496037 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-09 06:15:11.496078 | orchestrator | Thursday 09 April 2026 06:14:44 +0000 (0:00:33.750) 1:03:45.964 ******** 2026-04-09 06:15:11.496101 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:15:11.496120 | orchestrator | 2026-04-09 06:15:11.496136 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-09 06:15:11.496148 | orchestrator | Thursday 09 April 2026 06:14:45 +0000 (0:00:01.122) 1:03:47.086 ******** 2026-04-09 06:15:11.496159 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:15:11.496170 | orchestrator | 2026-04-09 06:15:11.496181 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-09 06:15:11.496192 | orchestrator | Thursday 09 April 2026 06:14:46 +0000 (0:00:01.115) 1:03:48.202 ******** 2026-04-09 06:15:11.496203 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-04-09 06:15:11.496224 | orchestrator | 2026-04-09 06:15:11.496235 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-09 06:15:11.496253 | orchestrator | Thursday 09 April 2026 06:14:47 +0000 (0:00:01.487) 1:03:49.690 ******** 2026-04-09 06:15:11.496264 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-04-09 06:15:11.496275 | orchestrator | 2026-04-09 06:15:11.496285 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-09 06:15:11.496296 | orchestrator | Thursday 09 April 2026 06:14:49 +0000 (0:00:01.604) 1:03:51.294 ******** 2026-04-09 06:15:11.496307 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:15:11.496318 | orchestrator | 2026-04-09 06:15:11.496328 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-09 06:15:11.496339 | orchestrator | Thursday 09 April 2026 06:14:51 +0000 (0:00:02.078) 1:03:53.373 ******** 2026-04-09 06:15:11.496350 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:15:11.496360 | orchestrator | 2026-04-09 06:15:11.496371 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-09 06:15:11.496387 | orchestrator | Thursday 09 April 2026 06:14:53 +0000 (0:00:01.940) 1:03:55.314 ******** 2026-04-09 06:15:11.496408 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:15:11.496435 | orchestrator | 2026-04-09 06:15:11.496454 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-09 06:15:11.496472 | orchestrator | Thursday 09 April 2026 06:14:55 +0000 (0:00:02.334) 1:03:57.648 ******** 2026-04-09 06:15:11.496489 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 06:15:11.496509 | orchestrator | 2026-04-09 06:15:11.496528 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-09 06:15:11.496547 | 
orchestrator | 2026-04-09 06:15:11.496566 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 06:15:11.496584 | orchestrator | Thursday 09 April 2026 06:14:58 +0000 (0:00:02.942) 1:04:00.591 ******** 2026-04-09 06:15:11.496596 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-09 06:15:11.496607 | orchestrator | 2026-04-09 06:15:11.496618 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-09 06:15:11.496628 | orchestrator | Thursday 09 April 2026 06:14:59 +0000 (0:00:01.093) 1:04:01.685 ******** 2026-04-09 06:15:11.496639 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:11.496650 | orchestrator | 2026-04-09 06:15:11.496661 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-09 06:15:11.496672 | orchestrator | Thursday 09 April 2026 06:15:01 +0000 (0:00:01.470) 1:04:03.155 ******** 2026-04-09 06:15:11.496683 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:11.496693 | orchestrator | 2026-04-09 06:15:11.496704 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 06:15:11.496715 | orchestrator | Thursday 09 April 2026 06:15:02 +0000 (0:00:01.215) 1:04:04.371 ******** 2026-04-09 06:15:11.496725 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:11.496736 | orchestrator | 2026-04-09 06:15:11.496747 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-09 06:15:11.496757 | orchestrator | Thursday 09 April 2026 06:15:03 +0000 (0:00:01.460) 1:04:05.831 ******** 2026-04-09 06:15:11.496768 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:11.496778 | orchestrator | 2026-04-09 06:15:11.496789 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-09 06:15:11.496800 | orchestrator | Thursday 
09 April 2026 06:15:05 +0000 (0:00:01.193) 1:04:07.025 ******** 2026-04-09 06:15:11.496811 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:11.496822 | orchestrator | 2026-04-09 06:15:11.496832 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-09 06:15:11.496843 | orchestrator | Thursday 09 April 2026 06:15:06 +0000 (0:00:01.171) 1:04:08.197 ******** 2026-04-09 06:15:11.496863 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:11.496897 | orchestrator | 2026-04-09 06:15:11.496910 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-09 06:15:11.496921 | orchestrator | Thursday 09 April 2026 06:15:07 +0000 (0:00:01.141) 1:04:09.338 ******** 2026-04-09 06:15:11.496932 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:11.496943 | orchestrator | 2026-04-09 06:15:11.496954 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-09 06:15:11.496965 | orchestrator | Thursday 09 April 2026 06:15:08 +0000 (0:00:01.214) 1:04:10.553 ******** 2026-04-09 06:15:11.496975 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:11.496986 | orchestrator | 2026-04-09 06:15:11.496997 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-09 06:15:11.497007 | orchestrator | Thursday 09 April 2026 06:15:09 +0000 (0:00:01.092) 1:04:11.646 ******** 2026-04-09 06:15:11.497018 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 06:15:11.497030 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 06:15:11.497049 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 06:15:11.497067 | orchestrator | 2026-04-09 06:15:11.497085 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-04-09 06:15:11.497117 | orchestrator | Thursday 09 April 2026 06:15:11 +0000 (0:00:01.708) 1:04:13.354 ******** 2026-04-09 06:15:37.361731 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:37.361843 | orchestrator | 2026-04-09 06:15:37.361859 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-09 06:15:37.361870 | orchestrator | Thursday 09 April 2026 06:15:12 +0000 (0:00:01.231) 1:04:14.586 ******** 2026-04-09 06:15:37.361880 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 06:15:37.361944 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 06:15:37.361955 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 06:15:37.361964 | orchestrator | 2026-04-09 06:15:37.361974 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 06:15:37.361999 | orchestrator | Thursday 09 April 2026 06:15:15 +0000 (0:00:03.032) 1:04:17.618 ******** 2026-04-09 06:15:37.362009 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-09 06:15:37.362072 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-09 06:15:37.362083 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-09 06:15:37.362093 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:37.362103 | orchestrator | 2026-04-09 06:15:37.362114 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 06:15:37.362124 | orchestrator | Thursday 09 April 2026 06:15:17 +0000 (0:00:01.444) 1:04:19.062 ******** 2026-04-09 06:15:37.362136 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 06:15:37.362149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 06:15:37.362159 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 06:15:37.362169 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:37.362179 | orchestrator | 2026-04-09 06:15:37.362188 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 06:15:37.362220 | orchestrator | Thursday 09 April 2026 06:15:19 +0000 (0:00:01.989) 1:04:21.052 ******** 2026-04-09 06:15:37.362232 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:37.362245 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:37.362255 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:37.362265 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:37.362275 | orchestrator | 2026-04-09 06:15:37.362287 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 06:15:37.362299 | orchestrator | Thursday 09 April 2026 06:15:20 +0000 (0:00:01.194) 1:04:22.246 ******** 2026-04-09 06:15:37.362328 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 06:15:13.286718', 'end': '2026-04-09 06:15:13.342967', 'delta': '0:00:00.056249', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 06:15:37.362348 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 06:15:13.878569', 'end': '2026-04-09 06:15:13.929102', 'delta': '0:00:00.050533', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 06:15:37.362361 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 06:15:14.435567', 'end': '2026-04-09 06:15:14.483209', 'delta': '0:00:00.047642', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 06:15:37.362380 | orchestrator | 2026-04-09 06:15:37.362391 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 06:15:37.362402 | orchestrator | Thursday 09 April 2026 06:15:21 +0000 (0:00:01.238) 1:04:23.485 ******** 2026-04-09 06:15:37.362413 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:37.362425 | orchestrator | 2026-04-09 06:15:37.362437 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 06:15:37.362448 | orchestrator | Thursday 09 April 2026 06:15:22 +0000 (0:00:01.265) 1:04:24.750 ******** 2026-04-09 06:15:37.362459 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:37.362471 | orchestrator | 2026-04-09 06:15:37.362482 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-04-09 06:15:37.362493 | orchestrator | Thursday 09 April 2026 06:15:24 +0000 (0:00:01.718) 1:04:26.469 ******** 2026-04-09 06:15:37.362504 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:37.362517 | orchestrator | 2026-04-09 06:15:37.362528 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 06:15:37.362539 | orchestrator | Thursday 09 April 2026 06:15:25 +0000 (0:00:01.235) 1:04:27.704 ******** 2026-04-09 06:15:37.362551 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-09 06:15:37.362562 | orchestrator | 2026-04-09 06:15:37.362574 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 06:15:37.362585 | orchestrator | Thursday 09 April 2026 06:15:27 +0000 (0:00:02.079) 1:04:29.784 ******** 2026-04-09 06:15:37.362595 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:37.362604 | orchestrator | 2026-04-09 06:15:37.362614 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 06:15:37.362624 | orchestrator | Thursday 09 April 2026 06:15:29 +0000 (0:00:01.135) 1:04:30.920 ******** 2026-04-09 06:15:37.362634 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:37.362643 | orchestrator | 2026-04-09 06:15:37.362653 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 06:15:37.362663 | orchestrator | Thursday 09 April 2026 06:15:30 +0000 (0:00:01.120) 1:04:32.041 ******** 2026-04-09 06:15:37.362673 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:37.362682 | orchestrator | 2026-04-09 06:15:37.362692 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 06:15:37.362702 | orchestrator | Thursday 09 April 2026 06:15:31 +0000 (0:00:01.233) 1:04:33.274 ******** 2026-04-09 06:15:37.362711 | orchestrator | 
skipping: [testbed-node-4] 2026-04-09 06:15:37.362721 | orchestrator | 2026-04-09 06:15:37.362731 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 06:15:37.362741 | orchestrator | Thursday 09 April 2026 06:15:32 +0000 (0:00:01.230) 1:04:34.504 ******** 2026-04-09 06:15:37.362750 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:37.362760 | orchestrator | 2026-04-09 06:15:37.362770 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 06:15:37.362779 | orchestrator | Thursday 09 April 2026 06:15:33 +0000 (0:00:01.124) 1:04:35.629 ******** 2026-04-09 06:15:37.362789 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:37.362799 | orchestrator | 2026-04-09 06:15:37.362808 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 06:15:37.362818 | orchestrator | Thursday 09 April 2026 06:15:34 +0000 (0:00:01.206) 1:04:36.835 ******** 2026-04-09 06:15:37.362827 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:37.362837 | orchestrator | 2026-04-09 06:15:37.362847 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 06:15:37.362857 | orchestrator | Thursday 09 April 2026 06:15:36 +0000 (0:00:01.187) 1:04:38.022 ******** 2026-04-09 06:15:37.362866 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:37.362876 | orchestrator | 2026-04-09 06:15:37.362913 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 06:15:37.362930 | orchestrator | Thursday 09 April 2026 06:15:37 +0000 (0:00:01.199) 1:04:39.222 ******** 2026-04-09 06:15:39.882457 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:39.882560 | orchestrator | 2026-04-09 06:15:39.882576 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 06:15:39.882588 
| orchestrator | Thursday 09 April 2026 06:15:38 +0000 (0:00:01.111) 1:04:40.334 ******** 2026-04-09 06:15:39.882600 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:39.882612 | orchestrator | 2026-04-09 06:15:39.882624 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 06:15:39.882634 | orchestrator | Thursday 09 April 2026 06:15:39 +0000 (0:00:01.188) 1:04:41.523 ******** 2026-04-09 06:15:39.882664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:15:39.882682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'uuids': ['c5a762f6-19fc-430f-b395-3c5066cc9fcd'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy']}})  2026-04-09 06:15:39.882697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60e6f74a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-09 06:15:39.882710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f']}})  2026-04-09 06:15:39.882722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:15:39.882734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:15:39.882786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-09 06:15:39.882805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:15:39.882818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV', 'dm-uuid-CRYPT-LUKS2-952a49d36c2646fe9329a26e5adefe63-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 06:15:39.882829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:15:39.882841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'uuids': ['952a49d3-6c26-46fe-9329-a26e5adefe63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV']}})  2026-04-09 06:15:39.882853 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6']}})  2026-04-09 06:15:39.882865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:15:39.882954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9009f97f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 06:15:41.325669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:15:41.325771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:15:41.325788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy', 'dm-uuid-CRYPT-LUKS2-c5a762f619fc430fb3953c5066cc9fcd-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 06:15:41.325803 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:41.325816 | orchestrator | 2026-04-09 06:15:41.325828 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 06:15:41.325863 | orchestrator | Thursday 09 April 2026 06:15:41 +0000 (0:00:01.463) 1:04:42.986 ******** 2026-04-09 06:15:41.325876 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:41.325976 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6', 'dm-uuid-LVM-VSFf3ccEgp0wKL5jhz806Y1ipsT5ObrjT6G1Wcak0n2a6HcA1XF9dc3OSaf3QiXy'], 'uuids': ['c5a762f6-19fc-430f-b395-3c5066cc9fcd'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:41.325999 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105', 'scsi-SQEMU_QEMU_HARDDISK_60e6f74a-bbf9-45c5-9e1f-c9b9c4c4d105'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60e6f74a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:41.326090 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HoNasn-5QxW-KcVM-fZxm-N33I-s5zp-OiEUUv', 'scsi-0QEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74', 'scsi-SQEMU_QEMU_HARDDISK_91609add-34c1-46d3-840a-9160ce481f74'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:41.326108 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:41.326130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:41.326142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:41.326161 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:41.326180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV', 'dm-uuid-CRYPT-LUKS2-952a49d36c2646fe9329a26e5adefe63-KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:46.827954 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:46.828111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--68e90870--4763--57e7--8e76--63c40a6d6d6f-osd--block--68e90870--4763--57e7--8e76--63c40a6d6d6f', 'dm-uuid-LVM-T0ejETzGFepEIpL08MD1OeW49dsnvI7wKSROHCdR1c0EeKRnCg2dSklYcbDl2pDV'], 'uuids': ['952a49d3-6c26-46fe-9329-a26e5adefe63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '91609add', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['KSROHC-dR1c-0EeK-RnCg-2dSk-lYcb-Dl2pDV']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:46.828149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Q1Gc3L-FdZu-zuWJ-nsAv-cjJy-ZUop-F84tN5', 'scsi-0QEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf', 'scsi-SQEMU_QEMU_HARDDISK_ecc4ee99-00bb-43e9-af90-abdbfbfdafbf'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ecc4ee99', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9961abb4--5e3b--57c6--b852--cf206941d3b6-osd--block--9961abb4--5e3b--57c6--b852--cf206941d3b6']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:46.828180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:46.828212 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9009f97f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1', 'scsi-SQEMU_QEMU_HARDDISK_9009f97f-5099-4efd-80df-b0fc690d20be-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:46.828232 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:46.828244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:46.828259 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy', 'dm-uuid-CRYPT-LUKS2-c5a762f619fc430fb3953c5066cc9fcd-T6G1Wc-ak0n-2a6H-cA1X-F9dc-3OSa-f3QiXy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:15:46.828273 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:15:46.828285 | orchestrator | 2026-04-09 06:15:46.828296 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 06:15:46.828308 | orchestrator | Thursday 09 April 2026 06:15:42 +0000 (0:00:01.429) 1:04:44.416 ******** 2026-04-09 06:15:46.828318 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:46.828328 | orchestrator | 2026-04-09 06:15:46.828338 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 06:15:46.828348 | orchestrator | Thursday 09 April 2026 06:15:44 +0000 (0:00:01.525) 1:04:45.942 ******** 2026-04-09 06:15:46.828358 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:46.828367 | orchestrator | 2026-04-09 06:15:46.828377 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 06:15:46.828387 | orchestrator | Thursday 09 April 2026 06:15:45 +0000 (0:00:01.199) 1:04:47.142 ******** 2026-04-09 06:15:46.828397 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:15:46.828407 | orchestrator | 2026-04-09 06:15:46.828417 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 06:15:46.828433 | orchestrator | Thursday 09 April 2026 06:15:46 +0000 (0:00:01.549) 1:04:48.692 ******** 2026-04-09 06:16:29.158466 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.158604 | orchestrator | 2026-04-09 06:16:29.158621 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 06:16:29.158635 | orchestrator | Thursday 09 April 2026 06:15:47 +0000 (0:00:01.138) 1:04:49.830 ******** 2026-04-09 06:16:29.158647 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
06:16:29.158658 | orchestrator | 2026-04-09 06:16:29.158670 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 06:16:29.158704 | orchestrator | Thursday 09 April 2026 06:15:49 +0000 (0:00:01.250) 1:04:51.081 ******** 2026-04-09 06:16:29.158716 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.158727 | orchestrator | 2026-04-09 06:16:29.158738 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 06:16:29.158750 | orchestrator | Thursday 09 April 2026 06:15:50 +0000 (0:00:01.152) 1:04:52.234 ******** 2026-04-09 06:16:29.158761 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-09 06:16:29.158773 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-09 06:16:29.158784 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-09 06:16:29.158795 | orchestrator | 2026-04-09 06:16:29.158806 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 06:16:29.158818 | orchestrator | Thursday 09 April 2026 06:15:52 +0000 (0:00:02.033) 1:04:54.268 ******** 2026-04-09 06:16:29.158829 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-09 06:16:29.158840 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-09 06:16:29.158851 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-09 06:16:29.158862 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.158873 | orchestrator | 2026-04-09 06:16:29.158884 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 06:16:29.158924 | orchestrator | Thursday 09 April 2026 06:15:53 +0000 (0:00:01.183) 1:04:55.452 ******** 2026-04-09 06:16:29.158945 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-04-09 06:16:29.158959 | 
orchestrator | 2026-04-09 06:16:29.158971 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 06:16:29.158986 | orchestrator | Thursday 09 April 2026 06:15:54 +0000 (0:00:01.130) 1:04:56.583 ******** 2026-04-09 06:16:29.158999 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.159012 | orchestrator | 2026-04-09 06:16:29.159025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 06:16:29.159037 | orchestrator | Thursday 09 April 2026 06:15:55 +0000 (0:00:01.144) 1:04:57.728 ******** 2026-04-09 06:16:29.159049 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.159062 | orchestrator | 2026-04-09 06:16:29.159074 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 06:16:29.159087 | orchestrator | Thursday 09 April 2026 06:15:57 +0000 (0:00:01.199) 1:04:58.927 ******** 2026-04-09 06:16:29.159099 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.159112 | orchestrator | 2026-04-09 06:16:29.159125 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 06:16:29.159137 | orchestrator | Thursday 09 April 2026 06:15:58 +0000 (0:00:01.180) 1:05:00.107 ******** 2026-04-09 06:16:29.159151 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:16:29.159163 | orchestrator | 2026-04-09 06:16:29.159175 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 06:16:29.159188 | orchestrator | Thursday 09 April 2026 06:15:59 +0000 (0:00:01.264) 1:05:01.372 ******** 2026-04-09 06:16:29.159201 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-09 06:16:29.159214 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-09 06:16:29.159227 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-04-09 06:16:29.159240 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.159253 | orchestrator | 2026-04-09 06:16:29.159265 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 06:16:29.159294 | orchestrator | Thursday 09 April 2026 06:16:00 +0000 (0:00:01.410) 1:05:02.782 ******** 2026-04-09 06:16:29.159307 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-09 06:16:29.159320 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-09 06:16:29.159334 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-09 06:16:29.159353 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.159364 | orchestrator | 2026-04-09 06:16:29.159375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 06:16:29.159386 | orchestrator | Thursday 09 April 2026 06:16:02 +0000 (0:00:01.404) 1:05:04.187 ******** 2026-04-09 06:16:29.159397 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-09 06:16:29.159408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-09 06:16:29.159419 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-09 06:16:29.159430 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.159441 | orchestrator | 2026-04-09 06:16:29.159452 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 06:16:29.159463 | orchestrator | Thursday 09 April 2026 06:16:03 +0000 (0:00:01.489) 1:05:05.677 ******** 2026-04-09 06:16:29.159474 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:16:29.159486 | orchestrator | 2026-04-09 06:16:29.159497 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 06:16:29.159508 | orchestrator | Thursday 09 April 2026 06:16:04 +0000 
(0:00:01.159) 1:05:06.836 ******** 2026-04-09 06:16:29.159519 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 06:16:29.159530 | orchestrator | 2026-04-09 06:16:29.159541 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 06:16:29.159552 | orchestrator | Thursday 09 April 2026 06:16:06 +0000 (0:00:01.459) 1:05:08.295 ******** 2026-04-09 06:16:29.159581 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 06:16:29.159593 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 06:16:29.159604 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 06:16:29.159615 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 06:16:29.159626 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-09 06:16:29.159637 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 06:16:29.159648 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 06:16:29.159659 | orchestrator | 2026-04-09 06:16:29.159670 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 06:16:29.159681 | orchestrator | Thursday 09 April 2026 06:16:08 +0000 (0:00:02.194) 1:05:10.490 ******** 2026-04-09 06:16:29.159692 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 06:16:29.159703 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 06:16:29.159714 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 06:16:29.159725 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-09 06:16:29.159736 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-09 06:16:29.159747 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 06:16:29.159758 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 06:16:29.159769 | orchestrator | 2026-04-09 06:16:29.159779 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-09 06:16:29.159790 | orchestrator | Thursday 09 April 2026 06:16:11 +0000 (0:00:02.409) 1:05:12.899 ******** 2026-04-09 06:16:29.159801 | orchestrator | changed: [testbed-node-4] 2026-04-09 06:16:29.159812 | orchestrator | 2026-04-09 06:16:29.159823 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-09 06:16:29.159834 | orchestrator | Thursday 09 April 2026 06:16:12 +0000 (0:00:01.893) 1:05:14.793 ******** 2026-04-09 06:16:29.159845 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 06:16:29.159863 | orchestrator | 2026-04-09 06:16:29.159874 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-09 06:16:29.159885 | orchestrator | Thursday 09 April 2026 06:16:15 +0000 (0:00:02.632) 1:05:17.425 ******** 2026-04-09 06:16:29.159919 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 06:16:29.159932 | orchestrator | 2026-04-09 06:16:29.159943 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 06:16:29.159954 | orchestrator | Thursday 09 April 2026 06:16:17 +0000 (0:00:02.006) 1:05:19.432 ******** 2026-04-09 06:16:29.159964 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-09 06:16:29.159975 | orchestrator | 2026-04-09 06:16:29.159986 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 06:16:29.159997 | orchestrator | Thursday 09 April 2026 06:16:18 +0000 (0:00:01.184) 1:05:20.617 ******** 2026-04-09 06:16:29.160007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-09 06:16:29.160018 | orchestrator | 2026-04-09 06:16:29.160029 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 06:16:29.160045 | orchestrator | Thursday 09 April 2026 06:16:19 +0000 (0:00:01.147) 1:05:21.764 ******** 2026-04-09 06:16:29.160056 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.160067 | orchestrator | 2026-04-09 06:16:29.160078 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 06:16:29.160089 | orchestrator | Thursday 09 April 2026 06:16:21 +0000 (0:00:01.166) 1:05:22.931 ******** 2026-04-09 06:16:29.160100 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:16:29.160111 | orchestrator | 2026-04-09 06:16:29.160122 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-09 06:16:29.160133 | orchestrator | Thursday 09 April 2026 06:16:22 +0000 (0:00:01.605) 1:05:24.536 ******** 2026-04-09 06:16:29.160143 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:16:29.160154 | orchestrator | 2026-04-09 06:16:29.160165 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 06:16:29.160176 | orchestrator | Thursday 09 April 2026 06:16:24 +0000 (0:00:01.516) 1:05:26.053 ******** 2026-04-09 06:16:29.160187 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:16:29.160198 | orchestrator | 2026-04-09 06:16:29.160209 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 06:16:29.160219 | orchestrator | Thursday 09 April 2026 06:16:25 +0000 (0:00:01.571) 1:05:27.624 ******** 2026-04-09 06:16:29.160230 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.160241 | orchestrator | 2026-04-09 06:16:29.160252 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 06:16:29.160262 | orchestrator | Thursday 09 April 2026 06:16:26 +0000 (0:00:01.123) 1:05:28.748 ******** 2026-04-09 06:16:29.160273 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.160284 | orchestrator | 2026-04-09 06:16:29.160295 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 06:16:29.160306 | orchestrator | Thursday 09 April 2026 06:16:28 +0000 (0:00:01.122) 1:05:29.871 ******** 2026-04-09 06:16:29.160317 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:16:29.160327 | orchestrator | 2026-04-09 06:16:29.160338 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 06:16:29.160360 | orchestrator | Thursday 09 April 2026 06:16:29 +0000 (0:00:01.148) 1:05:31.019 ******** 2026-04-09 06:17:08.397550 | 
orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.397686 | orchestrator | 2026-04-09 06:17:08.397705 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 06:17:08.397719 | orchestrator | Thursday 09 April 2026 06:16:30 +0000 (0:00:01.520) 1:05:32.540 ******** 2026-04-09 06:17:08.397731 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.397767 | orchestrator | 2026-04-09 06:17:08.397780 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 06:17:08.397791 | orchestrator | Thursday 09 April 2026 06:16:32 +0000 (0:00:01.544) 1:05:34.084 ******** 2026-04-09 06:17:08.397803 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.397815 | orchestrator | 2026-04-09 06:17:08.397827 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 06:17:08.397838 | orchestrator | Thursday 09 April 2026 06:16:33 +0000 (0:00:00.870) 1:05:34.955 ******** 2026-04-09 06:17:08.397849 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.397860 | orchestrator | 2026-04-09 06:17:08.397871 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 06:17:08.397882 | orchestrator | Thursday 09 April 2026 06:16:33 +0000 (0:00:00.764) 1:05:35.719 ******** 2026-04-09 06:17:08.397893 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.397904 | orchestrator | 2026-04-09 06:17:08.397949 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 06:17:08.397960 | orchestrator | Thursday 09 April 2026 06:16:34 +0000 (0:00:00.799) 1:05:36.519 ******** 2026-04-09 06:17:08.397971 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.397982 | orchestrator | 2026-04-09 06:17:08.397994 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 06:17:08.398076 
| orchestrator | Thursday 09 April 2026 06:16:35 +0000 (0:00:00.804) 1:05:37.323 ******** 2026-04-09 06:17:08.398091 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.398104 | orchestrator | 2026-04-09 06:17:08.398118 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 06:17:08.398132 | orchestrator | Thursday 09 April 2026 06:16:36 +0000 (0:00:00.805) 1:05:38.129 ******** 2026-04-09 06:17:08.398145 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398158 | orchestrator | 2026-04-09 06:17:08.398171 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 06:17:08.398184 | orchestrator | Thursday 09 April 2026 06:16:37 +0000 (0:00:00.778) 1:05:38.907 ******** 2026-04-09 06:17:08.398197 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398210 | orchestrator | 2026-04-09 06:17:08.398222 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 06:17:08.398236 | orchestrator | Thursday 09 April 2026 06:16:37 +0000 (0:00:00.754) 1:05:39.662 ******** 2026-04-09 06:17:08.398249 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398262 | orchestrator | 2026-04-09 06:17:08.398275 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 06:17:08.398288 | orchestrator | Thursday 09 April 2026 06:16:38 +0000 (0:00:00.788) 1:05:40.451 ******** 2026-04-09 06:17:08.398404 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.398418 | orchestrator | 2026-04-09 06:17:08.398430 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 06:17:08.398441 | orchestrator | Thursday 09 April 2026 06:16:39 +0000 (0:00:00.805) 1:05:41.257 ******** 2026-04-09 06:17:08.398452 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.398463 | orchestrator | 2026-04-09 06:17:08.398474 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-09 06:17:08.398485 | orchestrator | Thursday 09 April 2026 06:16:40 +0000 (0:00:00.782) 1:05:42.040 ******** 2026-04-09 06:17:08.398496 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398507 | orchestrator | 2026-04-09 06:17:08.398517 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-09 06:17:08.398528 | orchestrator | Thursday 09 April 2026 06:16:41 +0000 (0:00:00.834) 1:05:42.874 ******** 2026-04-09 06:17:08.398539 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398550 | orchestrator | 2026-04-09 06:17:08.398576 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-09 06:17:08.398588 | orchestrator | Thursday 09 April 2026 06:16:41 +0000 (0:00:00.792) 1:05:43.667 ******** 2026-04-09 06:17:08.398599 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398620 | orchestrator | 2026-04-09 06:17:08.398631 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-09 06:17:08.398642 | orchestrator | Thursday 09 April 2026 06:16:42 +0000 (0:00:00.891) 1:05:44.558 ******** 2026-04-09 06:17:08.398653 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398664 | orchestrator | 2026-04-09 06:17:08.398675 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-09 06:17:08.398686 | orchestrator | Thursday 09 April 2026 06:16:43 +0000 (0:00:00.806) 1:05:45.364 ******** 2026-04-09 06:17:08.398697 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398708 | orchestrator | 2026-04-09 06:17:08.398718 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-09 06:17:08.398729 | orchestrator | Thursday 09 April 2026 06:16:44 +0000 (0:00:00.828) 1:05:46.193 ******** 
2026-04-09 06:17:08.398740 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398751 | orchestrator | 2026-04-09 06:17:08.398761 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-09 06:17:08.398772 | orchestrator | Thursday 09 April 2026 06:16:45 +0000 (0:00:00.772) 1:05:46.965 ******** 2026-04-09 06:17:08.398783 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398794 | orchestrator | 2026-04-09 06:17:08.398805 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-09 06:17:08.398816 | orchestrator | Thursday 09 April 2026 06:16:45 +0000 (0:00:00.791) 1:05:47.757 ******** 2026-04-09 06:17:08.398827 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398838 | orchestrator | 2026-04-09 06:17:08.398849 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-09 06:17:08.398860 | orchestrator | Thursday 09 April 2026 06:16:46 +0000 (0:00:00.763) 1:05:48.521 ******** 2026-04-09 06:17:08.398870 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398882 | orchestrator | 2026-04-09 06:17:08.398944 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-09 06:17:08.398958 | orchestrator | Thursday 09 April 2026 06:16:47 +0000 (0:00:00.799) 1:05:49.321 ******** 2026-04-09 06:17:08.398969 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.398980 | orchestrator | 2026-04-09 06:17:08.398991 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-09 06:17:08.399002 | orchestrator | Thursday 09 April 2026 06:16:48 +0000 (0:00:00.798) 1:05:50.119 ******** 2026-04-09 06:17:08.399013 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.399024 | orchestrator | 2026-04-09 06:17:08.399035 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-09 06:17:08.399046 | orchestrator | Thursday 09 April 2026 06:16:49 +0000 (0:00:00.763) 1:05:50.882 ******** 2026-04-09 06:17:08.399056 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.399068 | orchestrator | 2026-04-09 06:17:08.399079 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 06:17:08.399090 | orchestrator | Thursday 09 April 2026 06:16:49 +0000 (0:00:00.787) 1:05:51.670 ******** 2026-04-09 06:17:08.399101 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.399112 | orchestrator | 2026-04-09 06:17:08.399123 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 06:17:08.399134 | orchestrator | Thursday 09 April 2026 06:16:51 +0000 (0:00:01.590) 1:05:53.261 ******** 2026-04-09 06:17:08.399145 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.399155 | orchestrator | 2026-04-09 06:17:08.399166 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 06:17:08.399177 | orchestrator | Thursday 09 April 2026 06:16:53 +0000 (0:00:01.838) 1:05:55.099 ******** 2026-04-09 06:17:08.399188 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-09 06:17:08.399200 | orchestrator | 2026-04-09 06:17:08.399211 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 06:17:08.399222 | orchestrator | Thursday 09 April 2026 06:16:54 +0000 (0:00:01.261) 1:05:56.361 ******** 2026-04-09 06:17:08.399240 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.399252 | orchestrator | 2026-04-09 06:17:08.399262 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 06:17:08.399273 | orchestrator | Thursday 09 April 2026 06:16:55 +0000 (0:00:01.128) 1:05:57.490 ******** 
2026-04-09 06:17:08.399284 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.399295 | orchestrator | 2026-04-09 06:17:08.399306 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 06:17:08.399317 | orchestrator | Thursday 09 April 2026 06:16:56 +0000 (0:00:01.130) 1:05:58.621 ******** 2026-04-09 06:17:08.399328 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 06:17:08.399339 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 06:17:08.399350 | orchestrator | 2026-04-09 06:17:08.399361 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 06:17:08.399371 | orchestrator | Thursday 09 April 2026 06:16:58 +0000 (0:00:01.857) 1:06:00.479 ******** 2026-04-09 06:17:08.399388 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.399407 | orchestrator | 2026-04-09 06:17:08.399425 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 06:17:08.399443 | orchestrator | Thursday 09 April 2026 06:17:00 +0000 (0:00:01.431) 1:06:01.910 ******** 2026-04-09 06:17:08.399456 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.399467 | orchestrator | 2026-04-09 06:17:08.399478 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 06:17:08.399489 | orchestrator | Thursday 09 April 2026 06:17:01 +0000 (0:00:01.154) 1:06:03.065 ******** 2026-04-09 06:17:08.399500 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.399510 | orchestrator | 2026-04-09 06:17:08.399521 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 06:17:08.399538 | orchestrator | Thursday 09 April 2026 06:17:01 +0000 (0:00:00.794) 1:06:03.860 ******** 2026-04-09 06:17:08.399549 | orchestrator | 
skipping: [testbed-node-4] 2026-04-09 06:17:08.399559 | orchestrator | 2026-04-09 06:17:08.399570 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 06:17:08.399581 | orchestrator | Thursday 09 April 2026 06:17:02 +0000 (0:00:00.793) 1:06:04.654 ******** 2026-04-09 06:17:08.399592 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-04-09 06:17:08.399603 | orchestrator | 2026-04-09 06:17:08.399614 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 06:17:08.399625 | orchestrator | Thursday 09 April 2026 06:17:03 +0000 (0:00:01.134) 1:06:05.788 ******** 2026-04-09 06:17:08.399635 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:08.399646 | orchestrator | 2026-04-09 06:17:08.399657 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-09 06:17:08.399668 | orchestrator | Thursday 09 April 2026 06:17:05 +0000 (0:00:01.904) 1:06:07.693 ******** 2026-04-09 06:17:08.399679 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 06:17:08.399690 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 06:17:08.399701 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 06:17:08.399711 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.399723 | orchestrator | 2026-04-09 06:17:08.399733 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-09 06:17:08.399744 | orchestrator | Thursday 09 April 2026 06:17:07 +0000 (0:00:01.228) 1:06:08.922 ******** 2026-04-09 06:17:08.399755 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.399765 | orchestrator | 2026-04-09 06:17:08.399776 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-09 06:17:08.399787 | orchestrator | Thursday 09 April 2026 06:17:08 +0000 (0:00:01.171) 1:06:10.093 ******** 2026-04-09 06:17:08.399805 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:08.399816 | orchestrator | 2026-04-09 06:17:08.399835 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-09 06:17:51.378814 | orchestrator | Thursday 09 April 2026 06:17:09 +0000 (0:00:01.172) 1:06:11.265 ******** 2026-04-09 06:17:51.378984 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379003 | orchestrator | 2026-04-09 06:17:51.379015 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-09 06:17:51.379025 | orchestrator | Thursday 09 April 2026 06:17:10 +0000 (0:00:01.229) 1:06:12.495 ******** 2026-04-09 06:17:51.379035 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379045 | orchestrator | 2026-04-09 06:17:51.379056 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-09 06:17:51.379065 | orchestrator | Thursday 09 April 2026 06:17:11 +0000 (0:00:01.196) 1:06:13.691 ******** 2026-04-09 06:17:51.379075 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379085 | orchestrator | 2026-04-09 06:17:51.379095 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 06:17:51.379105 | orchestrator | Thursday 09 April 2026 06:17:12 +0000 (0:00:00.832) 1:06:14.524 ******** 2026-04-09 06:17:51.379114 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:51.379125 | orchestrator | 2026-04-09 06:17:51.379135 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 06:17:51.379146 | orchestrator | Thursday 09 April 2026 06:17:14 +0000 (0:00:02.106) 1:06:16.631 ******** 2026-04-09 06:17:51.379155 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 06:17:51.379166 | orchestrator | 2026-04-09 06:17:51.379176 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 06:17:51.379186 | orchestrator | Thursday 09 April 2026 06:17:15 +0000 (0:00:00.796) 1:06:17.428 ******** 2026-04-09 06:17:51.379196 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-04-09 06:17:51.379212 | orchestrator | 2026-04-09 06:17:51.379229 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-09 06:17:51.379246 | orchestrator | Thursday 09 April 2026 06:17:16 +0000 (0:00:01.099) 1:06:18.528 ******** 2026-04-09 06:17:51.379261 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379277 | orchestrator | 2026-04-09 06:17:51.379293 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-09 06:17:51.379311 | orchestrator | Thursday 09 April 2026 06:17:17 +0000 (0:00:01.140) 1:06:19.669 ******** 2026-04-09 06:17:51.379329 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379346 | orchestrator | 2026-04-09 06:17:51.379364 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-09 06:17:51.379378 | orchestrator | Thursday 09 April 2026 06:17:18 +0000 (0:00:01.152) 1:06:20.822 ******** 2026-04-09 06:17:51.379390 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379402 | orchestrator | 2026-04-09 06:17:51.379413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-09 06:17:51.379425 | orchestrator | Thursday 09 April 2026 06:17:20 +0000 (0:00:01.162) 1:06:21.985 ******** 2026-04-09 06:17:51.379436 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379448 | orchestrator | 2026-04-09 06:17:51.379460 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-09 06:17:51.379471 | orchestrator | Thursday 09 April 2026 06:17:21 +0000 (0:00:01.135) 1:06:23.120 ******** 2026-04-09 06:17:51.379483 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379494 | orchestrator | 2026-04-09 06:17:51.379506 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-09 06:17:51.379517 | orchestrator | Thursday 09 April 2026 06:17:22 +0000 (0:00:01.165) 1:06:24.286 ******** 2026-04-09 06:17:51.379528 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379539 | orchestrator | 2026-04-09 06:17:51.379550 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-09 06:17:51.379562 | orchestrator | Thursday 09 April 2026 06:17:23 +0000 (0:00:01.185) 1:06:25.472 ******** 2026-04-09 06:17:51.379597 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379609 | orchestrator | 2026-04-09 06:17:51.379634 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-09 06:17:51.379646 | orchestrator | Thursday 09 April 2026 06:17:24 +0000 (0:00:01.127) 1:06:26.599 ******** 2026-04-09 06:17:51.379657 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.379670 | orchestrator | 2026-04-09 06:17:51.379681 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-09 06:17:51.379693 | orchestrator | Thursday 09 April 2026 06:17:25 +0000 (0:00:01.180) 1:06:27.779 ******** 2026-04-09 06:17:51.379704 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:17:51.379714 | orchestrator | 2026-04-09 06:17:51.379723 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-09 06:17:51.379733 | orchestrator | Thursday 09 April 2026 06:17:26 +0000 (0:00:00.789) 1:06:28.569 ******** 2026-04-09 06:17:51.379742 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-09 06:17:51.379753 | orchestrator | 2026-04-09 06:17:51.379762 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-09 06:17:51.379772 | orchestrator | Thursday 09 April 2026 06:17:27 +0000 (0:00:01.125) 1:06:29.694 ******** 2026-04-09 06:17:51.379781 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-04-09 06:17:51.379792 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-09 06:17:51.379801 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-09 06:17:51.379811 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-09 06:17:51.379820 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-09 06:17:51.379830 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-09 06:17:51.379839 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-09 06:17:51.379849 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-09 06:17:51.379859 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-09 06:17:51.379868 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-09 06:17:51.379878 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-09 06:17:51.379903 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-09 06:17:51.379914 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-09 06:17:51.379949 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-09 06:17:51.379959 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-09 06:17:51.379969 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-09 06:17:51.379979 | orchestrator | 2026-04-09 06:17:51.379989 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-09 06:17:51.379998 | orchestrator | Thursday 09 April 2026 06:17:33 +0000 (0:00:06.145) 1:06:35.839 ******** 2026-04-09 06:17:51.380008 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-09 06:17:51.380018 | orchestrator | 2026-04-09 06:17:51.380027 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-09 06:17:51.380037 | orchestrator | Thursday 09 April 2026 06:17:35 +0000 (0:00:01.156) 1:06:36.996 ******** 2026-04-09 06:17:51.380047 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 06:17:51.380058 | orchestrator | 2026-04-09 06:17:51.380068 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-09 06:17:51.380078 | orchestrator | Thursday 09 April 2026 06:17:36 +0000 (0:00:01.497) 1:06:38.494 ******** 2026-04-09 06:17:51.380088 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 06:17:51.380105 | orchestrator | 2026-04-09 06:17:51.380115 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 06:17:51.380124 | orchestrator | Thursday 09 April 2026 06:17:38 +0000 (0:00:01.636) 1:06:40.131 ******** 2026-04-09 06:17:51.380134 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.380144 | orchestrator | 2026-04-09 06:17:51.380154 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 06:17:51.380163 | orchestrator | Thursday 09 April 2026 06:17:39 +0000 (0:00:00.782) 1:06:40.914 ******** 2026-04-09 06:17:51.380173 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.380183 | 
orchestrator | 2026-04-09 06:17:51.380193 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 06:17:51.380208 | orchestrator | Thursday 09 April 2026 06:17:39 +0000 (0:00:00.863) 1:06:41.777 ******** 2026-04-09 06:17:51.380224 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.380240 | orchestrator | 2026-04-09 06:17:51.380255 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-09 06:17:51.380272 | orchestrator | Thursday 09 April 2026 06:17:40 +0000 (0:00:00.798) 1:06:42.575 ******** 2026-04-09 06:17:51.380289 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.380304 | orchestrator | 2026-04-09 06:17:51.380319 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 06:17:51.380329 | orchestrator | Thursday 09 April 2026 06:17:41 +0000 (0:00:00.803) 1:06:43.379 ******** 2026-04-09 06:17:51.380339 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.380349 | orchestrator | 2026-04-09 06:17:51.380359 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-09 06:17:51.380369 | orchestrator | Thursday 09 April 2026 06:17:42 +0000 (0:00:00.849) 1:06:44.228 ******** 2026-04-09 06:17:51.380379 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.380388 | orchestrator | 2026-04-09 06:17:51.380398 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-09 06:17:51.380408 | orchestrator | Thursday 09 April 2026 06:17:43 +0000 (0:00:00.796) 1:06:45.025 ******** 2026-04-09 06:17:51.380418 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:17:51.380433 | orchestrator | 2026-04-09 06:17:51.380443 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
***
2026-04-09 06:17:51.380453 | orchestrator | Thursday 09 April 2026 06:17:43 +0000 (0:00:00.769) 1:06:45.795 ********
2026-04-09 06:17:51.380463 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:17:51.380473 | orchestrator |
2026-04-09 06:17:51.380483 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 06:17:51.380493 | orchestrator | Thursday 09 April 2026 06:17:44 +0000 (0:00:00.806) 1:06:46.601 ********
2026-04-09 06:17:51.380502 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:17:51.380512 | orchestrator |
2026-04-09 06:17:51.380522 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 06:17:51.380532 | orchestrator | Thursday 09 April 2026 06:17:45 +0000 (0:00:00.820) 1:06:47.422 ********
2026-04-09 06:17:51.380542 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:17:51.380551 | orchestrator |
2026-04-09 06:17:51.380561 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 06:17:51.380571 | orchestrator | Thursday 09 April 2026 06:17:46 +0000 (0:00:00.766) 1:06:48.188 ********
2026-04-09 06:17:51.380581 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:17:51.380590 | orchestrator |
2026-04-09 06:17:51.380600 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 06:17:51.380610 | orchestrator | Thursday 09 April 2026 06:17:47 +0000 (0:00:00.826) 1:06:49.015 ********
2026-04-09 06:17:51.380619 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-04-09 06:17:51.380629 | orchestrator |
2026-04-09 06:17:51.380639 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 06:17:51.380648 | orchestrator | Thursday 09 April 2026 06:17:51 +0000 (0:00:04.000) 1:06:53.015 ********
2026-04-09 06:17:51.380666 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 06:17:51.380676 | orchestrator |
2026-04-09 06:17:51.380693 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 06:18:31.928301 | orchestrator | Thursday 09 April 2026 06:17:52 +0000 (0:00:00.878) 1:06:53.894 ********
2026-04-09 06:18:31.928452 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-09 06:18:31.928485 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-09 06:18:31.928507 | orchestrator |
2026-04-09 06:18:31.928529 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-09 06:18:31.928548 | orchestrator | Thursday 09 April 2026 06:17:56 +0000 (0:00:04.747) 1:06:58.642 ********
2026-04-09 06:18:31.928565 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.928578 | orchestrator |
2026-04-09 06:18:31.928590 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-09 06:18:31.928601 | orchestrator | Thursday 09 April 2026 06:17:57 +0000 (0:00:00.848) 1:06:59.491 ********
2026-04-09 06:18:31.928612 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.928623 | orchestrator |
2026-04-09 06:18:31.928635 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 06:18:31.928648 | orchestrator | Thursday 09 April 2026 06:17:58 +0000 (0:00:00.771) 1:07:00.263 ********
2026-04-09 06:18:31.928659 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.928670 | orchestrator |
2026-04-09 06:18:31.928681 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 06:18:31.928692 | orchestrator | Thursday 09 April 2026 06:17:59 +0000 (0:00:00.812) 1:07:01.075 ********
2026-04-09 06:18:31.928703 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.928714 | orchestrator |
2026-04-09 06:18:31.928725 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 06:18:31.928736 | orchestrator | Thursday 09 April 2026 06:17:59 +0000 (0:00:00.783) 1:07:01.859 ********
2026-04-09 06:18:31.928746 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.928762 | orchestrator |
2026-04-09 06:18:31.928780 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 06:18:31.928799 | orchestrator | Thursday 09 April 2026 06:18:00 +0000 (0:00:00.794) 1:07:02.653 ********
2026-04-09 06:18:31.928818 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:18:31.928839 | orchestrator |
2026-04-09 06:18:31.928859 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 06:18:31.928880 | orchestrator | Thursday 09 April 2026 06:18:01 +0000 (0:00:00.867) 1:07:03.521 ********
2026-04-09 06:18:31.928898 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 06:18:31.928913 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 06:18:31.928926 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 06:18:31.928975 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.928987 | orchestrator |
2026-04-09 06:18:31.928998 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 06:18:31.929009 | orchestrator | Thursday 09 April 2026 06:18:02 +0000 (0:00:01.090) 1:07:04.612 ********
2026-04-09 06:18:31.929037 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 06:18:31.929071 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 06:18:31.929083 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 06:18:31.929103 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.929120 | orchestrator |
2026-04-09 06:18:31.929139 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 06:18:31.929158 | orchestrator | Thursday 09 April 2026 06:18:03 +0000 (0:00:01.041) 1:07:05.654 ********
2026-04-09 06:18:31.929177 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 06:18:31.929197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 06:18:31.929215 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 06:18:31.929235 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.929253 | orchestrator |
2026-04-09 06:18:31.929269 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 06:18:31.929280 | orchestrator | Thursday 09 April 2026 06:18:04 +0000 (0:00:01.057) 1:07:06.711 ********
2026-04-09 06:18:31.929291 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:18:31.929302 | orchestrator |
2026-04-09 06:18:31.929313 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 06:18:31.929324 | orchestrator | Thursday 09 April 2026 06:18:05 +0000 (0:00:00.840) 1:07:07.552 ********
2026-04-09 06:18:31.929334 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 06:18:31.929345 | orchestrator |
2026-04-09 06:18:31.929356 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 06:18:31.929366 | orchestrator | Thursday 09 April 2026 06:18:06 +0000 (0:00:00.991) 1:07:08.543 ********
2026-04-09 06:18:31.929377 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:18:31.929388 | orchestrator |
2026-04-09 06:18:31.929398 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-09 06:18:31.929409 | orchestrator | Thursday 09 April 2026 06:18:08 +0000 (0:00:01.526) 1:07:10.070 ********
2026-04-09 06:18:31.929420 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4
2026-04-09 06:18:31.929431 | orchestrator |
2026-04-09 06:18:31.929463 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-09 06:18:31.929475 | orchestrator | Thursday 09 April 2026 06:18:09 +0000 (0:00:01.134) 1:07:11.205 ********
2026-04-09 06:18:31.929492 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 06:18:31.929510 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-09 06:18:31.929531 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 06:18:31.929549 | orchestrator |
2026-04-09 06:18:31.929565 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-09 06:18:31.929577 | orchestrator | Thursday 09 April 2026 06:18:12 +0000 (0:00:03.295) 1:07:14.500 ********
2026-04-09 06:18:31.929587 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-09 06:18:31.929598 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-09 06:18:31.929609 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:18:31.929629 | orchestrator |
2026-04-09 06:18:31.929647 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-09 06:18:31.929666 | orchestrator | Thursday 09 April 2026 06:18:14 +0000 (0:00:02.014) 1:07:16.514 ********
2026-04-09 06:18:31.929685 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.929696 | orchestrator |
2026-04-09 06:18:31.929707 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-09 06:18:31.929718 | orchestrator | Thursday 09 April 2026 06:18:15 +0000 (0:00:00.788) 1:07:17.303 ********
2026-04-09 06:18:31.929729 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4
2026-04-09 06:18:31.929741 | orchestrator |
2026-04-09 06:18:31.929752 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-09 06:18:31.929762 | orchestrator | Thursday 09 April 2026 06:18:16 +0000 (0:00:01.127) 1:07:18.431 ********
2026-04-09 06:18:31.929784 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 06:18:31.929797 | orchestrator |
2026-04-09 06:18:31.929811 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-09 06:18:31.929830 | orchestrator | Thursday 09 April 2026 06:18:18 +0000 (0:00:01.698) 1:07:20.130 ********
2026-04-09 06:18:31.929850 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 06:18:31.929869 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-09 06:18:31.929883 | orchestrator |
2026-04-09 06:18:31.930354 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-09 06:18:31.930375 | orchestrator | Thursday 09 April 2026 06:18:23 +0000 (0:00:05.247) 1:07:25.377 ********
2026-04-09 06:18:31.930386 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 06:18:31.930397 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 06:18:31.930408 | orchestrator |
2026-04-09 06:18:31.930419 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-09 06:18:31.930430 | orchestrator | Thursday 09 April 2026 06:18:26 +0000 (0:00:03.163) 1:07:28.541 ********
2026-04-09 06:18:31.930448 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-09 06:18:31.930466 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:18:31.930485 | orchestrator |
2026-04-09 06:18:31.930504 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-09 06:18:31.930522 | orchestrator | Thursday 09 April 2026 06:18:28 +0000 (0:00:01.621) 1:07:30.162 ********
2026-04-09 06:18:31.930540 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4
2026-04-09 06:18:31.930560 | orchestrator |
2026-04-09 06:18:31.930590 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-09 06:18:31.930610 | orchestrator | Thursday 09 April 2026 06:18:29 +0000 (0:00:01.133) 1:07:31.295 ********
2026-04-09 06:18:31.930622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:18:31.930634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:18:31.930645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:18:31.930656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:18:31.930667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:18:31.930678 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:18:31.930689 | orchestrator |
2026-04-09 06:18:31.930699 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-09 06:18:31.930710 | orchestrator | Thursday 09 April 2026 06:18:31 +0000 (0:00:02.069) 1:07:33.365 ********
2026-04-09 06:18:31.930721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:18:31.930732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:18:31.930743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:18:31.930768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:19:39.532324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:19:39.532463 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:19:39.532481 | orchestrator |
2026-04-09 06:19:39.532510 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-09 06:19:39.532524 | orchestrator | Thursday 09 April 2026 06:18:33 +0000 (0:00:01.626) 1:07:34.991 ********
2026-04-09 06:19:39.532535 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:19:39.532548 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:19:39.532559 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:19:39.532570 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:19:39.532583 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-09 06:19:39.532594 | orchestrator |
2026-04-09 06:19:39.532605 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-09 06:19:39.532616 | orchestrator | Thursday 09 April 2026 06:19:05 +0000 (0:00:32.237) 1:08:07.229 ********
2026-04-09 06:19:39.532627 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:19:39.532638 | orchestrator |
2026-04-09 06:19:39.532649 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-09 06:19:39.532660 | orchestrator | Thursday 09 April 2026 06:19:06 +0000 (0:00:00.762) 1:08:07.992 ********
2026-04-09 06:19:39.532671 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:19:39.532682 | orchestrator |
2026-04-09 06:19:39.532693 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-09 06:19:39.532704 | orchestrator | Thursday 09 April 2026 06:19:06 +0000 (0:00:00.773) 1:08:08.766 ********
2026-04-09 06:19:39.532715 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4
2026-04-09 06:19:39.532727 | orchestrator |
2026-04-09 06:19:39.532739 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-09 06:19:39.532750 | orchestrator | Thursday 09 April 2026 06:19:08 +0000 (0:00:01.124) 1:08:09.890 ********
2026-04-09 06:19:39.532761 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4
2026-04-09 06:19:39.532772 | orchestrator |
2026-04-09 06:19:39.532783 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-09 06:19:39.532794 | orchestrator | Thursday 09 April 2026 06:19:09 +0000 (0:00:01.103) 1:08:10.993 ********
2026-04-09 06:19:39.532805 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:19:39.532817 | orchestrator |
2026-04-09 06:19:39.532828 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-09 06:19:39.532839 | orchestrator | Thursday 09 April 2026 06:19:11 +0000 (0:00:02.091) 1:08:13.085 ********
2026-04-09 06:19:39.532850 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:19:39.532863 | orchestrator |
2026-04-09 06:19:39.532876 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-09 06:19:39.532903 | orchestrator | Thursday 09 April 2026 06:19:13 +0000 (0:00:02.028) 1:08:15.113 ********
2026-04-09 06:19:39.532916 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:19:39.532930 | orchestrator |
2026-04-09 06:19:39.532943 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-09 06:19:39.533024 | orchestrator | Thursday 09 April 2026 06:19:15 +0000 (0:00:02.291) 1:08:17.405 ********
2026-04-09 06:19:39.533039 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 06:19:39.533061 | orchestrator |
2026-04-09 06:19:39.533074 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-04-09 06:19:39.533087 | orchestrator |
2026-04-09 06:19:39.533100 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 06:19:39.533113 | orchestrator | Thursday 09 April 2026 06:19:18 +0000 (0:00:03.138) 1:08:20.543 ********
2026-04-09 06:19:39.533126 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-04-09 06:19:39.533137 | orchestrator |
2026-04-09 06:19:39.533148 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 06:19:39.533159 | orchestrator | Thursday 09 April 2026 06:19:19 +0000 (0:00:01.094) 1:08:21.638 ********
2026-04-09 06:19:39.533170 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:39.533181 | orchestrator |
2026-04-09 06:19:39.533192 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 06:19:39.533203 | orchestrator | Thursday 09 April 2026 06:19:21 +0000 (0:00:01.450) 1:08:23.089 ********
2026-04-09 06:19:39.533214 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:39.533224 | orchestrator |
2026-04-09 06:19:39.533235 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 06:19:39.533246 | orchestrator | Thursday 09 April 2026 06:19:22 +0000 (0:00:01.161) 1:08:24.251 ********
2026-04-09 06:19:39.533257 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:39.533268 | orchestrator |
2026-04-09 06:19:39.533279 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 06:19:39.533290 | orchestrator | Thursday 09 April 2026 06:19:23 +0000 (0:00:01.465) 1:08:25.716 ********
2026-04-09 06:19:39.533300 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:39.533320 | orchestrator |
2026-04-09 06:19:39.533365 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 06:19:39.533391 | orchestrator | Thursday 09 April 2026 06:19:25 +0000 (0:00:01.184) 1:08:26.901 ********
2026-04-09 06:19:39.533409 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:39.533427 | orchestrator |
2026-04-09 06:19:39.533446 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 06:19:39.533466 | orchestrator | Thursday 09 April 2026 06:19:26 +0000 (0:00:01.190) 1:08:28.091 ********
2026-04-09 06:19:39.533485 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:39.533504 | orchestrator |
2026-04-09 06:19:39.533520 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 06:19:39.533531 | orchestrator | Thursday 09 April 2026 06:19:27 +0000 (0:00:01.168) 1:08:29.259 ********
2026-04-09 06:19:39.533542 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:39.533553 | orchestrator |
2026-04-09 06:19:39.533563 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 06:19:39.533574 | orchestrator | Thursday 09 April 2026 06:19:28 +0000 (0:00:01.155) 1:08:30.414 ********
2026-04-09 06:19:39.533585 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:39.533596 | orchestrator |
2026-04-09 06:19:39.533607 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 06:19:39.533618 | orchestrator | Thursday 09 April 2026 06:19:29 +0000 (0:00:01.138) 1:08:31.552 ********
2026-04-09 06:19:39.533629 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:19:39.533640 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:19:39.533651 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:19:39.533661 | orchestrator |
2026-04-09 06:19:39.533672 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 06:19:39.533683 | orchestrator | Thursday 09 April 2026 06:19:31 +0000 (0:00:02.099) 1:08:33.652 ********
2026-04-09 06:19:39.533694 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:39.533705 | orchestrator |
2026-04-09 06:19:39.533716 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 06:19:39.533726 | orchestrator | Thursday 09 April 2026 06:19:33 +0000 (0:00:02.868) 1:08:35.316 ********
2026-04-09 06:19:39.533747 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:19:39.533758 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:19:39.533769 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:19:39.533780 | orchestrator |
2026-04-09 06:19:39.533791 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 06:19:39.533802 | orchestrator | Thursday 09 April 2026 06:19:36 +0000 (0:00:02.868) 1:08:38.185 ********
2026-04-09 06:19:39.533813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 06:19:39.533825 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 06:19:39.533836 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 06:19:39.533847 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:39.533857 | orchestrator |
2026-04-09 06:19:39.533868 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 06:19:39.533879 | orchestrator | Thursday 09 April 2026 06:19:37 +0000 (0:00:01.465) 1:08:39.650 ********
2026-04-09 06:19:39.533899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 06:19:39.533914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 06:19:39.533925 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 06:19:39.533936 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:39.533970 | orchestrator |
2026-04-09 06:19:39.533982 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 06:19:39.533994 | orchestrator | Thursday 09 April 2026 06:19:39 +0000 (0:00:01.666) 1:08:41.317 ********
2026-04-09 06:19:39.534007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 06:19:39.534084 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 06:19:59.804323 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 06:19:59.804471 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:59.804499 | orchestrator |
2026-04-09 06:19:59.804522 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 06:19:59.804543 | orchestrator | Thursday 09 April 2026 06:19:40 +0000 (0:00:01.186) 1:08:42.504 ********
2026-04-09 06:19:59.804603 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '69d38aa54653', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 06:19:34.000718', 'end': '2026-04-09 06:19:34.068434', 'delta': '0:00:00.067716', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69d38aa54653'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 06:19:59.804628 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '3e7867c40460', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 06:19:34.583918', 'end': '2026-04-09 06:19:34.629767', 'delta': '0:00:00.045849', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e7867c40460'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 06:19:59.804665 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '5ed6058fb18c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 06:19:35.119644', 'end': '2026-04-09 06:19:35.168770', 'delta': '0:00:00.049126', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5ed6058fb18c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 06:19:59.804686 | orchestrator |
2026-04-09 06:19:59.804705 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-09 06:19:59.804724 | orchestrator | Thursday 09 April 2026 06:19:41 +0000 (0:00:01.207) 1:08:43.712 ********
2026-04-09 06:19:59.804742 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:59.804761 | orchestrator |
2026-04-09 06:19:59.804773 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 06:19:59.804784 | orchestrator | Thursday 09 April 2026 06:19:43 +0000 (0:00:01.206) 1:08:44.991 ********
2026-04-09 06:19:59.804795 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:59.804806 | orchestrator |
2026-04-09 06:19:59.804817 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 06:19:59.804831 | orchestrator | Thursday 09 April 2026 06:19:44 +0000 (0:00:01.206) 1:08:46.197 ********
2026-04-09 06:19:59.804843 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:59.804855 | orchestrator |
2026-04-09 06:19:59.804868 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 06:19:59.804880 | orchestrator | Thursday 09 April 2026 06:19:45 +0000 (0:00:01.554) 1:08:47.752 ********
2026-04-09 06:19:59.804893 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:19:59.804906 | orchestrator |
2026-04-09 06:19:59.804918 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 06:19:59.804931 | orchestrator | Thursday 09 April 2026 06:19:47 +0000 (0:00:02.050) 1:08:49.802 ********
2026-04-09 06:19:59.804945 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:59.804992 | orchestrator |
2026-04-09 06:19:59.805006 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-09 06:19:59.805030 | orchestrator | Thursday 09 April 2026 06:19:49 +0000 (0:00:01.216) 1:08:51.019 ********
2026-04-09 06:19:59.805064 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:59.805076 | orchestrator |
2026-04-09 06:19:59.805089 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-09 06:19:59.805102 | orchestrator | Thursday 09 April 2026 06:19:50 +0000 (0:00:01.171) 1:08:52.190 ********
2026-04-09 06:19:59.805115 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:59.805127 | orchestrator |
2026-04-09 06:19:59.805139 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 06:19:59.805152 | orchestrator | Thursday 09 April 2026 06:19:51 +0000 (0:00:01.239) 1:08:53.430 ********
2026-04-09 06:19:59.805165 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:59.805178 | orchestrator |
2026-04-09 06:19:59.805189 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-09 06:19:59.805200 | orchestrator | Thursday 09 April 2026 06:19:52 +0000 (0:00:01.205) 1:08:54.636 ********
2026-04-09 06:19:59.805210 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:59.805221 | orchestrator |
2026-04-09 06:19:59.805232 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-09 06:19:59.805243 | orchestrator | Thursday 09 April 2026 06:19:53 +0000 (0:00:01.105) 1:08:55.742 ********
2026-04-09 06:19:59.805253 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:59.805264 | orchestrator |
2026-04-09 06:19:59.805275 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-09 06:19:59.805286 | orchestrator | Thursday 09 April 2026 06:19:55 +0000 (0:00:01.215) 1:08:56.958 ********
2026-04-09 06:19:59.805297 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:59.805307 | orchestrator |
2026-04-09 06:19:59.805319 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-09 06:19:59.805329 | orchestrator | Thursday 09 April 2026 06:19:56 +0000 (0:00:01.135) 1:08:58.093 ********
2026-04-09 06:19:59.805340 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:59.805351 | orchestrator |
2026-04-09 06:19:59.805362 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-09 06:19:59.805373 | orchestrator | Thursday 09 April 2026 06:19:57 +0000 (0:00:01.188) 1:08:59.282 ********
2026-04-09 06:19:59.805383 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:19:59.805394 | orchestrator |
2026-04-09 06:19:59.805405 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-09 06:19:59.805416 | orchestrator | Thursday 09 April 2026 06:19:58 +0000 (0:00:01.107) 1:09:00.389 ********
2026-04-09 06:19:59.805427 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:19:59.805438 | orchestrator |
2026-04-09 06:19:59.805449 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-09 06:19:59.805459 | orchestrator | Thursday 09 April 2026 06:19:59 +0000 (0:00:01.154) 1:09:01.543 ********
2026-04-09 06:19:59.805471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:19:59.805490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'uuids': ['0d8306b6-b8d9-4741-84fa-e650942907f5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN']}})
2026-04-09 06:19:59.805511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e55aa834', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-09 06:19:59.805532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e']}})
2026-04-09 06:19:59.926239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:19:59.926335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-09 06:19:59.926352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-09 06:19:59.926365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0',
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:19:59.926391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE', 'dm-uuid-CRYPT-LUKS2-a0c575bd231a435faa33ebc924c5d720-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 06:19:59.926422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:19:59.926434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'uuids': ['a0c575bd-231a-435f-aa33-ebc924c5d720'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE']}})  2026-04-09 06:19:59.926465 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6']}})  2026-04-09 06:19:59.926476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:19:59.926495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e4edfb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-09 06:19:59.926514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:19:59.926525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-09 06:19:59.926543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN', 'dm-uuid-CRYPT-LUKS2-0d8306b6b8d9474184fae650942907f5-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-09 06:20:01.257628 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:20:01.257749 | orchestrator | 2026-04-09 06:20:01.257764 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 06:20:01.257778 | orchestrator | Thursday 09 April 2026 06:20:01 +0000 (0:00:01.362) 1:09:02.906 ******** 2026-04-09 06:20:01.257797 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.257817 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6', 'dm-uuid-LVM-9vxFUtTAtak5ZkuWrKKfGIkUHDKTWX7JDm3eA8Y5r9JAYPgNze2tzfH9lV9upfLN'], 'uuids': ['0d8306b6-b8d9-4741-84fa-e650942907f5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.257852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d', 'scsi-SQEMU_QEMU_HARDDISK_e55aa834-7a03-4cc6-8559-f68ddba0a04d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e55aa834', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.257894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-mluLyS-UGtI-41vG-BPyx-ooVb-U8x0-Mwltl0', 'scsi-0QEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e', 'scsi-SQEMU_QEMU_HARDDISK_1aa61eee-0aa0-422d-af75-f23cbcca004e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.257932 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.257943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.257952 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-09-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.258112 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.258141 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE', 'dm-uuid-CRYPT-LUKS2-a0c575bd231a435faa33ebc924c5d720-IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.258152 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:01.258174 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27c4b53f--c2bf--5253--84b2--9319684e0f9e-osd--block--27c4b53f--c2bf--5253--84b2--9319684e0f9e', 'dm-uuid-LVM-Pc55YnmQ0VktCZR8gyBjVYB68xFeQ7SyIBYDKlrKL18ENKM1lOFBMzNh1V9LMTjE'], 'uuids': ['a0c575bd-231a-435f-aa33-ebc924c5d720'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1aa61eee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IBYDKl-rKL1-8ENK-M1lO-FBMz-Nh1V-9LMTjE']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:13.664123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-BrMDw9-eTd2-RE46-4U0W-jzmp-yuNA-1uTxr3', 'scsi-0QEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4', 'scsi-SQEMU_QEMU_HARDDISK_82469e2d-64d1-4f4a-b9b3-b380ac500ec4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '82469e2d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6-osd--block--07250cb7--fce6--51fa--be28--6bf5f5cf4ef6']}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:13.664245 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:13.664306 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e4edfb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e4edfb-5041-4373-b2f8-2931b10ee965-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:13.664341 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:13.664355 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:13.664368 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN', 'dm-uuid-CRYPT-LUKS2-0d8306b6b8d9474184fae650942907f5-Dm3eA8-Y5r9-JAYP-gNze-2tzf-H9lV-9upfLN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-09 06:20:13.664391 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:20:13.664405 | orchestrator | 2026-04-09 06:20:13.664423 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 06:20:13.664436 | orchestrator | Thursday 09 April 2026 06:20:02 +0000 (0:00:01.375) 1:09:04.282 ******** 2026-04-09 06:20:13.664447 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:20:13.664459 | orchestrator | 2026-04-09 06:20:13.664470 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 06:20:13.664481 | orchestrator | Thursday 09 April 2026 06:20:03 +0000 (0:00:01.460) 1:09:05.742 ******** 2026-04-09 06:20:13.664492 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:20:13.664503 | orchestrator | 2026-04-09 06:20:13.664514 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 06:20:13.664525 | orchestrator | Thursday 09 April 2026 06:20:05 +0000 (0:00:01.129) 1:09:06.872 ******** 2026-04-09 06:20:13.664536 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:20:13.664546 | orchestrator | 2026-04-09 06:20:13.664557 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 06:20:13.664568 | orchestrator | Thursday 09 April 2026 06:20:06 +0000 (0:00:01.476) 1:09:08.349 ******** 2026-04-09 06:20:13.664579 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:20:13.664590 | orchestrator | 2026-04-09 06:20:13.664601 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 06:20:13.664612 | orchestrator | Thursday 09 April 2026 06:20:07 +0000 (0:00:01.155) 1:09:09.504 ******** 2026-04-09 06:20:13.664623 | orchestrator | skipping: [testbed-node-5] 2026-04-09 
06:20:13.664636 | orchestrator | 2026-04-09 06:20:13.664649 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 06:20:13.664662 | orchestrator | Thursday 09 April 2026 06:20:09 +0000 (0:00:01.729) 1:09:11.234 ******** 2026-04-09 06:20:13.664674 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:20:13.664687 | orchestrator | 2026-04-09 06:20:13.664701 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 06:20:13.664714 | orchestrator | Thursday 09 April 2026 06:20:10 +0000 (0:00:01.166) 1:09:12.400 ******** 2026-04-09 06:20:13.664727 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-09 06:20:13.664740 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-09 06:20:13.664754 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-09 06:20:13.664766 | orchestrator | 2026-04-09 06:20:13.664780 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 06:20:13.664792 | orchestrator | Thursday 09 April 2026 06:20:12 +0000 (0:00:01.714) 1:09:14.114 ******** 2026-04-09 06:20:13.664805 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-09 06:20:13.664818 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-09 06:20:13.664831 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-09 06:20:13.664844 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:20:13.664857 | orchestrator | 2026-04-09 06:20:13.664871 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 06:20:13.664883 | orchestrator | Thursday 09 April 2026 06:20:13 +0000 (0:00:01.181) 1:09:15.296 ******** 2026-04-09 06:20:13.664896 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-04-09 06:20:13.664909 | 
orchestrator | 2026-04-09 06:20:13.664954 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 06:20:56.424259 | orchestrator | Thursday 09 April 2026 06:20:14 +0000 (0:00:01.123) 1:09:16.420 ********
2026-04-09 06:20:56.424379 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.424398 | orchestrator |
2026-04-09 06:20:56.424411 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 06:20:56.424423 | orchestrator | Thursday 09 April 2026 06:20:15 +0000 (0:00:01.198) 1:09:17.618 ********
2026-04-09 06:20:56.424434 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.424445 | orchestrator |
2026-04-09 06:20:56.424457 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 06:20:56.424468 | orchestrator | Thursday 09 April 2026 06:20:16 +0000 (0:00:01.150) 1:09:18.769 ********
2026-04-09 06:20:56.424479 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.424490 | orchestrator |
2026-04-09 06:20:56.424501 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 06:20:56.424512 | orchestrator | Thursday 09 April 2026 06:20:18 +0000 (0:00:01.172) 1:09:19.942 ********
2026-04-09 06:20:56.424523 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.424534 | orchestrator |
2026-04-09 06:20:56.424545 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 06:20:56.424556 | orchestrator | Thursday 09 April 2026 06:20:19 +0000 (0:00:01.257) 1:09:21.199 ********
2026-04-09 06:20:56.424568 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 06:20:56.424579 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 06:20:56.424590 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 06:20:56.424601 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.424612 | orchestrator |
2026-04-09 06:20:56.424623 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 06:20:56.424634 | orchestrator | Thursday 09 April 2026 06:20:20 +0000 (0:00:01.398) 1:09:22.598 ********
2026-04-09 06:20:56.424645 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 06:20:56.424656 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 06:20:56.424667 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 06:20:56.424678 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.424689 | orchestrator |
2026-04-09 06:20:56.424700 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 06:20:56.424711 | orchestrator | Thursday 09 April 2026 06:20:22 +0000 (0:00:01.809) 1:09:24.408 ********
2026-04-09 06:20:56.424721 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 06:20:56.424733 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 06:20:56.424794 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 06:20:56.424819 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.424840 | orchestrator |
2026-04-09 06:20:56.424860 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 06:20:56.424877 | orchestrator | Thursday 09 April 2026 06:20:24 +0000 (0:00:01.751) 1:09:26.160 ********
2026-04-09 06:20:56.424890 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.424903 | orchestrator |
2026-04-09 06:20:56.424917 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 06:20:56.424930 | orchestrator | Thursday 09 April 2026 06:20:25 +0000 (0:00:01.244) 1:09:27.405 ********
2026-04-09 06:20:56.424942 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-09 06:20:56.424956 | orchestrator |
2026-04-09 06:20:56.424969 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 06:20:56.424981 | orchestrator | Thursday 09 April 2026 06:20:26 +0000 (0:00:01.342) 1:09:28.747 ********
2026-04-09 06:20:56.424995 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:20:56.425009 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:20:56.425045 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:20:56.425059 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 06:20:56.425072 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 06:20:56.425085 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 06:20:56.425098 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 06:20:56.425145 | orchestrator |
2026-04-09 06:20:56.425181 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 06:20:56.425201 | orchestrator | Thursday 09 April 2026 06:20:28 +0000 (0:00:01.876) 1:09:30.624 ********
2026-04-09 06:20:56.425219 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 06:20:56.425237 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 06:20:56.425248 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 06:20:56.425259 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 06:20:56.425269 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 06:20:56.425280 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 06:20:56.425291 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 06:20:56.425302 | orchestrator |
2026-04-09 06:20:56.425314 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-04-09 06:20:56.425332 | orchestrator | Thursday 09 April 2026 06:20:31 +0000 (0:00:02.257) 1:09:32.881 ********
2026-04-09 06:20:56.425350 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:20:56.425368 | orchestrator |
2026-04-09 06:20:56.425407 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-04-09 06:20:56.425425 | orchestrator | Thursday 09 April 2026 06:20:32 +0000 (0:00:01.976) 1:09:34.858 ********
2026-04-09 06:20:56.425444 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 06:20:56.425465 | orchestrator |
2026-04-09 06:20:56.425485 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-04-09 06:20:56.425503 | orchestrator | Thursday 09 April 2026 06:20:35 +0000 (0:00:02.508) 1:09:37.367 ********
2026-04-09 06:20:56.425520 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 06:20:56.425531 | orchestrator |
2026-04-09 06:20:56.425542 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 06:20:56.425553 | orchestrator | Thursday 09 April 2026 06:20:37 +0000 (0:00:01.944) 1:09:39.311 ********
2026-04-09 06:20:56.425564 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-04-09 06:20:56.425575 | orchestrator |
2026-04-09 06:20:56.425586 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 06:20:56.425597 | orchestrator | Thursday 09 April 2026 06:20:38 +0000 (0:00:01.130) 1:09:40.442 ********
2026-04-09 06:20:56.425608 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-04-09 06:20:56.425619 | orchestrator |
2026-04-09 06:20:56.425630 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 06:20:56.425641 | orchestrator | Thursday 09 April 2026 06:20:39 +0000 (0:00:01.118) 1:09:41.560 ********
2026-04-09 06:20:56.425652 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.425663 | orchestrator |
2026-04-09 06:20:56.425674 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 06:20:56.425685 | orchestrator | Thursday 09 April 2026 06:20:40 +0000 (0:00:01.140) 1:09:42.700 ********
2026-04-09 06:20:56.425707 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.425718 | orchestrator |
2026-04-09 06:20:56.425729 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 06:20:56.425741 | orchestrator | Thursday 09 April 2026 06:20:42 +0000 (0:00:01.628) 1:09:44.329 ********
2026-04-09 06:20:56.425782 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.425801 | orchestrator |
2026-04-09 06:20:56.425819 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 06:20:56.425835 | orchestrator | Thursday 09 April 2026 06:20:43 +0000 (0:00:01.509) 1:09:45.839 ********
2026-04-09 06:20:56.425853 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.425870 | orchestrator |
2026-04-09 06:20:56.425887 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 06:20:56.425904 | orchestrator | Thursday 09 April 2026 06:20:45 +0000 (0:00:01.532) 1:09:47.372 ********
2026-04-09 06:20:56.425923 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.425941 | orchestrator |
2026-04-09 06:20:56.425957 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 06:20:56.425974 | orchestrator | Thursday 09 April 2026 06:20:46 +0000 (0:00:01.098) 1:09:48.470 ********
2026-04-09 06:20:56.425991 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.426007 | orchestrator |
2026-04-09 06:20:56.426110 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 06:20:56.426132 | orchestrator | Thursday 09 April 2026 06:20:47 +0000 (0:00:01.128) 1:09:49.599 ********
2026-04-09 06:20:56.426150 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.426169 | orchestrator |
2026-04-09 06:20:56.426189 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 06:20:56.426207 | orchestrator | Thursday 09 April 2026 06:20:48 +0000 (0:00:01.117) 1:09:50.716 ********
2026-04-09 06:20:56.426225 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.426243 | orchestrator |
2026-04-09 06:20:56.426262 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 06:20:56.426281 | orchestrator | Thursday 09 April 2026 06:20:50 +0000 (0:00:01.548) 1:09:52.265 ********
2026-04-09 06:20:56.426300 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.426318 | orchestrator |
2026-04-09 06:20:56.426336 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 06:20:56.426355 | orchestrator | Thursday 09 April 2026 06:20:51 +0000 (0:00:01.562) 1:09:53.827 ********
2026-04-09 06:20:56.426373 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.426391 | orchestrator |
2026-04-09 06:20:56.426409 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 06:20:56.426427 | orchestrator | Thursday 09 April 2026 06:20:52 +0000 (0:00:00.791) 1:09:54.619 ********
2026-04-09 06:20:56.426483 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.426503 | orchestrator |
2026-04-09 06:20:56.426522 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 06:20:56.426540 | orchestrator | Thursday 09 April 2026 06:20:53 +0000 (0:00:00.772) 1:09:55.392 ********
2026-04-09 06:20:56.426558 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.426576 | orchestrator |
2026-04-09 06:20:56.426593 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 06:20:56.426608 | orchestrator | Thursday 09 April 2026 06:20:54 +0000 (0:00:00.793) 1:09:56.186 ********
2026-04-09 06:20:56.426625 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.426642 | orchestrator |
2026-04-09 06:20:56.426662 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 06:20:56.426678 | orchestrator | Thursday 09 April 2026 06:20:55 +0000 (0:00:00.804) 1:09:56.991 ********
2026-04-09 06:20:56.426693 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:20:56.426710 | orchestrator |
2026-04-09 06:20:56.426725 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 06:20:56.426741 | orchestrator | Thursday 09 April 2026 06:20:55 +0000 (0:00:00.793) 1:09:57.784 ********
2026-04-09 06:20:56.426799 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:20:56.426817 | orchestrator |
2026-04-09 06:20:56.426852 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 06:21:37.187567 | orchestrator | Thursday 09 April 2026 06:20:56 +0000 (0:00:00.862) 1:09:58.647 ********
2026-04-09 06:21:37.187736 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.187755 | orchestrator |
2026-04-09 06:21:37.187769 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 06:21:37.187780 | orchestrator | Thursday 09 April 2026 06:20:57 +0000 (0:00:00.789) 1:09:59.437 ********
2026-04-09 06:21:37.187792 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.187805 | orchestrator |
2026-04-09 06:21:37.187824 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 06:21:37.187843 | orchestrator | Thursday 09 April 2026 06:20:58 +0000 (0:00:00.788) 1:10:00.225 ********
2026-04-09 06:21:37.187862 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:21:37.187882 | orchestrator |
2026-04-09 06:21:37.187903 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 06:21:37.187923 | orchestrator | Thursday 09 April 2026 06:20:59 +0000 (0:00:00.782) 1:10:01.008 ********
2026-04-09 06:21:37.187942 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:21:37.187955 | orchestrator |
2026-04-09 06:21:37.187966 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-09 06:21:37.187978 | orchestrator | Thursday 09 April 2026 06:20:59 +0000 (0:00:00.789) 1:10:01.798 ********
2026-04-09 06:21:37.187989 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188000 | orchestrator |
2026-04-09 06:21:37.188011 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-09 06:21:37.188022 | orchestrator | Thursday 09 April 2026 06:21:00 +0000 (0:00:00.771) 1:10:02.569 ********
2026-04-09 06:21:37.188033 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188044 | orchestrator |
2026-04-09 06:21:37.188055 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-09 06:21:37.188066 | orchestrator | Thursday 09 April 2026 06:21:01 +0000 (0:00:00.780) 1:10:03.350 ********
2026-04-09 06:21:37.188080 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188092 | orchestrator |
2026-04-09 06:21:37.188106 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-09 06:21:37.188118 | orchestrator | Thursday 09 April 2026 06:21:02 +0000 (0:00:00.778) 1:10:04.129 ********
2026-04-09 06:21:37.188131 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188144 | orchestrator |
2026-04-09 06:21:37.188157 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-09 06:21:37.188170 | orchestrator | Thursday 09 April 2026 06:21:03 +0000 (0:00:00.787) 1:10:04.917 ********
2026-04-09 06:21:37.188181 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188192 | orchestrator |
2026-04-09 06:21:37.188219 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-09 06:21:37.188230 | orchestrator | Thursday 09 April 2026 06:21:03 +0000 (0:00:00.778) 1:10:05.695 ********
2026-04-09 06:21:37.188241 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188252 | orchestrator |
2026-04-09 06:21:37.188263 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-09 06:21:37.188274 | orchestrator | Thursday 09 April 2026 06:21:04 +0000 (0:00:00.768) 1:10:06.463 ********
2026-04-09 06:21:37.188285 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188296 | orchestrator |
2026-04-09 06:21:37.188307 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-09 06:21:37.188319 | orchestrator | Thursday 09 April 2026 06:21:05 +0000 (0:00:00.813) 1:10:07.277 ********
2026-04-09 06:21:37.188330 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188341 | orchestrator |
2026-04-09 06:21:37.188351 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-09 06:21:37.188362 | orchestrator | Thursday 09 April 2026 06:21:06 +0000 (0:00:00.761) 1:10:08.039 ********
2026-04-09 06:21:37.188397 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188409 | orchestrator |
2026-04-09 06:21:37.188420 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-09 06:21:37.188431 | orchestrator | Thursday 09 April 2026 06:21:07 +0000 (0:00:00.858) 1:10:08.897 ********
2026-04-09 06:21:37.188441 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188452 | orchestrator |
2026-04-09 06:21:37.188463 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-09 06:21:37.188474 | orchestrator | Thursday 09 April 2026 06:21:07 +0000 (0:00:00.789) 1:10:09.686 ********
2026-04-09 06:21:37.188484 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188495 | orchestrator |
2026-04-09 06:21:37.188506 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-09 06:21:37.188517 | orchestrator | Thursday 09 April 2026 06:21:08 +0000 (0:00:00.771) 1:10:10.458 ********
2026-04-09 06:21:37.188528 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188538 | orchestrator |
2026-04-09 06:21:37.188550 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-09 06:21:37.188560 | orchestrator | Thursday 09 April 2026 06:21:09 +0000 (0:00:00.756) 1:10:11.215 ********
2026-04-09 06:21:37.188571 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:21:37.188582 | orchestrator |
2026-04-09 06:21:37.188593 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-09 06:21:37.188604 | orchestrator | Thursday 09 April 2026 06:21:10 +0000 (0:00:01.605) 1:10:12.820 ********
2026-04-09 06:21:37.188640 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:21:37.188652 | orchestrator |
2026-04-09 06:21:37.188663 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-09 06:21:37.188674 | orchestrator | Thursday 09 April 2026 06:21:12 +0000 (0:00:01.885) 1:10:14.706 ********
2026-04-09 06:21:37.188685 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-04-09 06:21:37.188697 | orchestrator |
2026-04-09 06:21:37.188708 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-09 06:21:37.188719 | orchestrator | Thursday 09 April 2026 06:21:13 +0000 (0:00:01.142) 1:10:15.849 ********
2026-04-09 06:21:37.188730 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188741 | orchestrator |
2026-04-09 06:21:37.188752 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-09 06:21:37.188783 | orchestrator | Thursday 09 April 2026 06:21:15 +0000 (0:00:01.156) 1:10:17.005 ********
2026-04-09 06:21:37.188795 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.188811 | orchestrator |
2026-04-09 06:21:37.188830 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-09 06:21:37.188848 | orchestrator | Thursday 09 April 2026 06:21:16 +0000 (0:00:01.168) 1:10:18.174 ********
2026-04-09 06:21:37.188866 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 06:21:37.188885 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 06:21:37.188903 | orchestrator |
2026-04-09 06:21:37.188922 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-09 06:21:37.188941 | orchestrator | Thursday 09 April 2026 06:21:18 +0000 (0:00:01.859) 1:10:20.034 ********
2026-04-09 06:21:37.188960 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:21:37.188973 | orchestrator |
2026-04-09 06:21:37.188984 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-09 06:21:37.188995 | orchestrator | Thursday 09 April 2026 06:21:19 +0000 (0:00:01.448) 1:10:21.482 ********
2026-04-09 06:21:37.189006 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189017 | orchestrator |
2026-04-09 06:21:37.189028 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-09 06:21:37.189039 | orchestrator | Thursday 09 April 2026 06:21:20 +0000 (0:00:01.192) 1:10:22.674 ********
2026-04-09 06:21:37.189060 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189071 | orchestrator |
2026-04-09 06:21:37.189082 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-09 06:21:37.189093 | orchestrator | Thursday 09 April 2026 06:21:21 +0000 (0:00:00.768) 1:10:23.443 ********
2026-04-09 06:21:37.189104 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189114 | orchestrator |
2026-04-09 06:21:37.189125 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-09 06:21:37.189136 | orchestrator | Thursday 09 April 2026 06:21:22 +0000 (0:00:00.787) 1:10:24.230 ********
2026-04-09 06:21:37.189147 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-04-09 06:21:37.189158 | orchestrator |
2026-04-09 06:21:37.189169 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-09 06:21:37.189180 | orchestrator | Thursday 09 April 2026 06:21:23 +0000 (0:00:01.128) 1:10:25.359 ********
2026-04-09 06:21:37.189191 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:21:37.189201 | orchestrator |
2026-04-09 06:21:37.189212 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-09 06:21:37.189229 | orchestrator | Thursday 09 April 2026 06:21:25 +0000 (0:00:01.877) 1:10:27.237 ********
2026-04-09 06:21:37.189241 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 06:21:37.189252 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 06:21:37.189262 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 06:21:37.189289 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189312 | orchestrator |
2026-04-09 06:21:37.189323 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-09 06:21:37.189334 | orchestrator | Thursday 09 April 2026 06:21:26 +0000 (0:00:01.131) 1:10:28.369 ********
2026-04-09 06:21:37.189345 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189356 | orchestrator |
2026-04-09 06:21:37.189367 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-09 06:21:37.189378 | orchestrator | Thursday 09 April 2026 06:21:27 +0000 (0:00:01.118) 1:10:29.488 ********
2026-04-09 06:21:37.189389 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189400 | orchestrator |
2026-04-09 06:21:37.189411 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-09 06:21:37.189422 | orchestrator | Thursday 09 April 2026 06:21:28 +0000 (0:00:01.267) 1:10:30.755 ********
2026-04-09 06:21:37.189432 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189443 | orchestrator |
2026-04-09 06:21:37.189454 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-09 06:21:37.189465 | orchestrator | Thursday 09 April 2026 06:21:30 +0000 (0:00:01.143) 1:10:31.899 ********
2026-04-09 06:21:37.189476 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189487 | orchestrator |
2026-04-09 06:21:37.189497 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-09 06:21:37.189508 | orchestrator | Thursday 09 April 2026 06:21:31 +0000 (0:00:01.172) 1:10:33.072 ********
2026-04-09 06:21:37.189519 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189530 | orchestrator |
2026-04-09 06:21:37.189541 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 06:21:37.189552 | orchestrator | Thursday 09 April 2026 06:21:32 +0000 (0:00:00.837) 1:10:33.909 ********
2026-04-09 06:21:37.189563 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:21:37.189574 | orchestrator |
2026-04-09 06:21:37.189585 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 06:21:37.189595 | orchestrator | Thursday 09 April 2026 06:21:34 +0000 (0:00:02.041) 1:10:35.951 ********
2026-04-09 06:21:37.189660 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:21:37.189675 | orchestrator |
2026-04-09 06:21:37.189687 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 06:21:37.189705 | orchestrator | Thursday 09 April 2026 06:21:34 +0000 (0:00:00.872) 1:10:36.823 ********
2026-04-09 06:21:37.189717 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-04-09 06:21:37.189728 | orchestrator |
2026-04-09 06:21:37.189739 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 06:21:37.189750 | orchestrator | Thursday 09 April 2026 06:21:36 +0000 (0:00:01.101) 1:10:37.925 ********
2026-04-09 06:21:37.189761 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:21:37.189771 | orchestrator |
2026-04-09 06:21:37.189783 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 06:21:37.189803 | orchestrator | Thursday 09 April 2026 06:21:37 +0000 (0:00:01.120) 1:10:39.045 ********
2026-04-09 06:22:18.206298 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.206413 | orchestrator |
2026-04-09 06:22:18.206431 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 06:22:18.206444 | orchestrator | Thursday 09 April 2026 06:21:38 +0000 (0:00:01.142) 1:10:40.188 ********
2026-04-09 06:22:18.206456 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.206467 | orchestrator |
2026-04-09 06:22:18.206547 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 06:22:18.206559 | orchestrator | Thursday 09 April 2026 06:21:39 +0000 (0:00:01.137) 1:10:41.325 ********
2026-04-09 06:22:18.206571 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.206582 | orchestrator |
2026-04-09 06:22:18.206593 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 06:22:18.206605 | orchestrator | Thursday 09 April 2026 06:21:40 +0000 (0:00:01.156) 1:10:42.482 ********
2026-04-09 06:22:18.206616 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.206627 | orchestrator |
2026-04-09 06:22:18.206638 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 06:22:18.206649 | orchestrator | Thursday 09 April 2026 06:21:41 +0000 (0:00:01.132) 1:10:43.615 ********
2026-04-09 06:22:18.206660 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.206671 | orchestrator |
2026-04-09 06:22:18.206683 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 06:22:18.206694 | orchestrator | Thursday 09 April 2026 06:21:42 +0000 (0:00:01.144) 1:10:44.759 ********
2026-04-09 06:22:18.206704 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.206715 | orchestrator |
2026-04-09 06:22:18.206728 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 06:22:18.206739 | orchestrator | Thursday 09 April 2026 06:21:44 +0000 (0:00:01.152) 1:10:45.912 ********
2026-04-09 06:22:18.206750 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.206761 | orchestrator |
2026-04-09 06:22:18.206772 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 06:22:18.206791 | orchestrator | Thursday 09 April 2026 06:21:45 +0000 (0:00:01.151) 1:10:47.063 ********
2026-04-09 06:22:18.206809 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:22:18.206828 | orchestrator |
2026-04-09 06:22:18.206849 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 06:22:18.206887 | orchestrator | Thursday 09 April 2026 06:21:45 +0000 (0:00:00.788) 1:10:47.852 ********
2026-04-09 06:22:18.206920 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-04-09 06:22:18.206936 | orchestrator |
2026-04-09 06:22:18.206966 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 06:22:18.206980 | orchestrator | Thursday 09 April 2026 06:21:47 +0000 (0:00:01.265) 1:10:49.117 ********
2026-04-09 06:22:18.206993 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-09 06:22:18.207007 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-09 06:22:18.207021 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-09 06:22:18.207034 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-09 06:22:18.207048 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-09 06:22:18.207084 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-09 06:22:18.207097 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-09 06:22:18.207110 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-09 06:22:18.207123 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 06:22:18.207137 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 06:22:18.207150 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 06:22:18.207163 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 06:22:18.207177 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 06:22:18.207190 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 06:22:18.207201 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-09 06:22:18.207213 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-09 06:22:18.207224 | orchestrator |
2026-04-09 06:22:18.207235 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 06:22:18.207246 | orchestrator | Thursday 09 April 2026 06:21:53 +0000 (0:00:06.274) 1:10:55.392 ********
2026-04-09 06:22:18.207257 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-04-09 06:22:18.207268 | orchestrator |
2026-04-09 06:22:18.207279 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-09 06:22:18.207290 | orchestrator | Thursday 09 April 2026 06:21:54 +0000 (0:00:01.105) 1:10:56.497 ********
2026-04-09 06:22:18.207301 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 06:22:18.207313 | orchestrator |
2026-04-09 06:22:18.207330 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-09 06:22:18.207348 | orchestrator | Thursday 09 April 2026 06:21:56 +0000 (0:00:01.478) 1:10:57.976 ********
2026-04-09 06:22:18.207366 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 06:22:18.207384 | orchestrator |
2026-04-09 06:22:18.207402 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 06:22:18.207419 | orchestrator | Thursday 09 April 2026 06:21:57 +0000 (0:00:01.636) 1:10:59.612 ********
2026-04-09 06:22:18.207431 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207441 | orchestrator |
2026-04-09 06:22:18.207452 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 06:22:18.207503 | orchestrator | Thursday 09 April 2026 06:21:58 +0000 (0:00:00.751) 1:11:00.364 ********
2026-04-09 06:22:18.207515 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207526 | orchestrator |
2026-04-09 06:22:18.207537 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 06:22:18.207548 | orchestrator | Thursday 09 April 2026 06:21:59 +0000 (0:00:00.785) 1:11:01.150 ********
2026-04-09 06:22:18.207559 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207570 | orchestrator |
2026-04-09 06:22:18.207581 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 06:22:18.207592 | orchestrator | Thursday 09 April 2026 06:22:00 +0000 (0:00:00.742) 1:11:01.893 ********
2026-04-09 06:22:18.207603 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207613 | orchestrator |
2026-04-09 06:22:18.207624 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 06:22:18.207636 | orchestrator | Thursday 09 April 2026 06:22:00 +0000 (0:00:00.783) 1:11:02.676 ********
2026-04-09 06:22:18.207655 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207674 | orchestrator |
2026-04-09 06:22:18.207692 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 06:22:18.207710 | orchestrator | Thursday 09 April 2026 06:22:01 +0000 (0:00:00.837) 1:11:03.434 ********
2026-04-09 06:22:18.207743 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207762 | orchestrator |
2026-04-09 06:22:18.207776 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 06:22:18.207787 | orchestrator | Thursday 09 April 2026 06:22:02 +0000 (0:00:00.757) 1:11:04.271 ********
2026-04-09 06:22:18.207798 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207809 | orchestrator |
2026-04-09 06:22:18.207820 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 06:22:18.207831 | orchestrator | Thursday 09 April 2026 06:22:03 +0000 (0:00:00.757) 1:11:05.029 ********
2026-04-09 06:22:18.207842 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207852 | orchestrator |
2026-04-09 06:22:18.207863 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 06:22:18.207874 | orchestrator | Thursday 09 April 2026 06:22:03 +0000 (0:00:00.830) 1:11:05.860 ********
2026-04-09 06:22:18.207885 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207896 | orchestrator |
2026-04-09 06:22:18.207907 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 06:22:18.207924 | orchestrator | Thursday 09 April 2026 06:22:04 +0000 (0:00:00.803) 1:11:06.664 ********
2026-04-09 06:22:18.207935 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207946 | orchestrator |
2026-04-09 06:22:18.207957 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 06:22:18.207968 | orchestrator | Thursday 09 April 2026 06:22:05 +0000 (0:00:00.788) 1:11:07.452 ********
2026-04-09 06:22:18.207979 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:22:18.207990 | orchestrator |
2026-04-09 06:22:18.208001 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 06:22:18.208012 | orchestrator | Thursday 09 April 2026 06:22:06 +0000 (0:00:00.784) 1:11:08.237 ********
2026-04-09 06:22:18.208022 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-09 06:22:18.208033 | orchestrator |
2026-04-09 06:22:18.208044 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 06:22:18.208055 | orchestrator | Thursday 09 April 2026 06:22:10 +0000 (0:00:03.999) 1:11:12.236 ********
2026-04-09 06:22:18.208066 | orchestrator |
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 06:22:18.208077 | orchestrator | 2026-04-09 06:22:18.208087 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 06:22:18.208098 | orchestrator | Thursday 09 April 2026 06:22:11 +0000 (0:00:00.838) 1:11:13.075 ******** 2026-04-09 06:22:18.208111 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-04-09 06:22:18.208126 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-04-09 06:22:18.208138 | orchestrator | 2026-04-09 06:22:18.208149 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 06:22:18.208159 | orchestrator | Thursday 09 April 2026 06:22:15 +0000 (0:00:04.528) 1:11:17.604 ******** 2026-04-09 06:22:18.208170 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:22:18.208181 | orchestrator | 2026-04-09 06:22:18.208192 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 06:22:18.208203 | orchestrator | Thursday 09 April 2026 06:22:16 +0000 (0:00:00.774) 1:11:18.378 ******** 2026-04-09 06:22:18.208222 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:22:18.208233 | orchestrator | 2026-04-09 06:22:18.208244 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 06:22:18.208255 | orchestrator | Thursday 09 April 2026 06:22:17 +0000 (0:00:00.802) 1:11:19.181 ******** 2026-04-09 06:22:18.208265 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:22:18.208277 | orchestrator | 2026-04-09 06:22:18.208287 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 06:22:18.208307 | orchestrator | Thursday 09 April 2026 06:22:18 +0000 (0:00:00.884) 1:11:20.065 ******** 2026-04-09 06:23:26.909487 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:23:26.909604 | orchestrator | 2026-04-09 06:23:26.909620 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 06:23:26.909633 | orchestrator | Thursday 09 April 2026 06:22:19 +0000 (0:00:00.811) 1:11:20.877 ******** 2026-04-09 06:23:26.909645 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:23:26.909656 | orchestrator | 2026-04-09 06:23:26.909668 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 06:23:26.909679 | orchestrator | Thursday 09 April 2026 06:22:19 +0000 (0:00:00.848) 1:11:21.726 ******** 2026-04-09 06:23:26.909690 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:23:26.909702 | orchestrator | 2026-04-09 06:23:26.909713 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 06:23:26.909724 | orchestrator | Thursday 09 April 2026 06:22:20 +0000 (0:00:00.906) 1:11:22.632 ******** 2026-04-09 06:23:26.909735 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-09 06:23:26.909747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-09 06:23:26.909758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-09 06:23:26.909769 | orchestrator | skipping: 
[testbed-node-5] 2026-04-09 06:23:26.909780 | orchestrator | 2026-04-09 06:23:26.909791 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 06:23:26.909802 | orchestrator | Thursday 09 April 2026 06:22:22 +0000 (0:00:01.536) 1:11:24.169 ******** 2026-04-09 06:23:26.909813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-09 06:23:26.909824 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-09 06:23:26.909835 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-09 06:23:26.909847 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:23:26.909858 | orchestrator | 2026-04-09 06:23:26.909870 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 06:23:26.909881 | orchestrator | Thursday 09 April 2026 06:22:23 +0000 (0:00:01.072) 1:11:25.241 ******** 2026-04-09 06:23:26.909892 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-09 06:23:26.909903 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-09 06:23:26.909914 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-09 06:23:26.909925 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:23:26.909936 | orchestrator | 2026-04-09 06:23:26.909964 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 06:23:26.909976 | orchestrator | Thursday 09 April 2026 06:22:24 +0000 (0:00:01.070) 1:11:26.312 ******** 2026-04-09 06:23:26.909987 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:23:26.909998 | orchestrator | 2026-04-09 06:23:26.910009 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 06:23:26.910086 | orchestrator | Thursday 09 April 2026 06:22:25 +0000 (0:00:00.844) 1:11:27.157 ******** 2026-04-09 06:23:26.910100 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-04-09 06:23:26.910113 | orchestrator | 2026-04-09 06:23:26.910125 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-09 06:23:26.910138 | orchestrator | Thursday 09 April 2026 06:22:26 +0000 (0:00:00.973) 1:11:28.130 ******** 2026-04-09 06:23:26.910152 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:23:26.910186 | orchestrator | 2026-04-09 06:23:26.910200 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-09 06:23:26.910213 | orchestrator | Thursday 09 April 2026 06:22:27 +0000 (0:00:01.373) 1:11:29.504 ******** 2026-04-09 06:23:26.910226 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-04-09 06:23:26.910240 | orchestrator | 2026-04-09 06:23:26.910253 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 06:23:26.910265 | orchestrator | Thursday 09 April 2026 06:22:28 +0000 (0:00:01.161) 1:11:30.666 ******** 2026-04-09 06:23:26.910305 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 06:23:26.910319 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 06:23:26.910331 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 06:23:26.910345 | orchestrator | 2026-04-09 06:23:26.910358 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 06:23:26.910371 | orchestrator | Thursday 09 April 2026 06:22:32 +0000 (0:00:03.309) 1:11:33.975 ******** 2026-04-09 06:23:26.910382 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-09 06:23:26.910394 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 06:23:26.910405 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:23:26.910415 | orchestrator | 2026-04-09 06:23:26.910426 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-09 06:23:26.910437 | orchestrator | Thursday 09 April 2026 06:22:34 +0000 (0:00:02.014) 1:11:35.989 ******** 2026-04-09 06:23:26.910448 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:23:26.910458 | orchestrator | 2026-04-09 06:23:26.910469 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-09 06:23:26.910480 | orchestrator | Thursday 09 April 2026 06:22:34 +0000 (0:00:00.779) 1:11:36.768 ******** 2026-04-09 06:23:26.910491 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-04-09 06:23:26.910502 | orchestrator | 2026-04-09 06:23:26.910513 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-09 06:23:26.910524 | orchestrator | Thursday 09 April 2026 06:22:36 +0000 (0:00:01.266) 1:11:38.035 ******** 2026-04-09 06:23:26.910536 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 06:23:26.910548 | orchestrator | 2026-04-09 06:23:26.910559 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-09 06:23:26.910570 | orchestrator | Thursday 09 April 2026 06:22:37 +0000 (0:00:01.608) 1:11:39.644 ******** 2026-04-09 06:23:26.910599 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 06:23:26.910611 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 06:23:26.910623 | orchestrator | 2026-04-09 06:23:26.910633 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 06:23:26.910644 | orchestrator | Thursday 09 April 2026 06:22:43 +0000 (0:00:05.427) 1:11:45.071 ******** 
2026-04-09 06:23:26.910655 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 06:23:26.910666 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 06:23:26.910676 | orchestrator | 2026-04-09 06:23:26.910687 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 06:23:26.910698 | orchestrator | Thursday 09 April 2026 06:22:46 +0000 (0:00:03.266) 1:11:48.338 ******** 2026-04-09 06:23:26.910709 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-09 06:23:26.910720 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:23:26.910731 | orchestrator | 2026-04-09 06:23:26.910742 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-09 06:23:26.910753 | orchestrator | Thursday 09 April 2026 06:22:48 +0000 (0:00:01.660) 1:11:49.999 ******** 2026-04-09 06:23:26.910772 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-04-09 06:23:26.910783 | orchestrator | 2026-04-09 06:23:26.910794 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-09 06:23:26.910804 | orchestrator | Thursday 09 April 2026 06:22:49 +0000 (0:00:01.198) 1:11:51.197 ******** 2026-04-09 06:23:26.910815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910877 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:23:26.910888 | orchestrator | 2026-04-09 06:23:26.910899 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-09 06:23:26.910910 | orchestrator | Thursday 09 April 2026 06:22:50 +0000 (0:00:01.655) 1:11:52.853 ******** 2026-04-09 06:23:26.910921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 06:23:26.910975 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:23:26.910986 | orchestrator | 2026-04-09 06:23:26.910997 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-09 06:23:26.911008 | orchestrator | Thursday 09 April 2026 06:22:52 +0000 (0:00:02.013) 1:11:54.866 ******** 2026-04-09 06:23:26.911019 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:23:26.911030 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:23:26.911041 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:23:26.911052 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:23:26.911064 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 06:23:26.911075 | orchestrator | 2026-04-09 06:23:26.911086 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-09 06:23:26.911097 | orchestrator | Thursday 09 April 2026 06:23:26 +0000 (0:00:33.084) 1:12:27.951 ******** 2026-04-09 06:23:26.911107 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:23:26.911118 | orchestrator | 2026-04-09 06:23:26.911129 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-09 06:23:26.911153 | orchestrator | Thursday 09 April 2026 06:23:26 +0000 (0:00:00.819) 1:12:28.770 ******** 2026-04-09 06:24:17.585826 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:24:17.585944 | orchestrator | 2026-04-09 06:24:17.585955 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-09 06:24:17.585965 | orchestrator | Thursday 09 April 2026 06:23:27 +0000 (0:00:00.767) 1:12:29.538 ******** 2026-04-09 06:24:17.585973 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-04-09 06:24:17.585981 | orchestrator | 2026-04-09 06:24:17.585988 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-09 06:24:17.585996 | orchestrator | Thursday 09 April 2026 06:23:28 +0000 (0:00:01.234) 1:12:30.772 ******** 2026-04-09 06:24:17.586086 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-04-09 06:24:17.586095 | orchestrator | 2026-04-09 06:24:17.586103 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-09 06:24:17.586111 | orchestrator | Thursday 09 April 2026 06:23:30 +0000 (0:00:01.116) 1:12:31.889 ******** 2026-04-09 06:24:17.586118 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:24:17.586126 | orchestrator | 2026-04-09 06:24:17.586133 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-09 06:24:17.586162 | orchestrator | Thursday 09 April 2026 06:23:32 +0000 (0:00:02.077) 1:12:33.967 ******** 2026-04-09 06:24:17.586169 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:24:17.586176 | orchestrator | 2026-04-09 06:24:17.586182 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-09 06:24:17.586189 | orchestrator | Thursday 09 April 2026 06:23:33 +0000 (0:00:01.829) 1:12:35.796 ******** 2026-04-09 06:24:17.586197 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:24:17.586204 | orchestrator | 2026-04-09 06:24:17.586211 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-09 06:24:17.586218 | orchestrator | Thursday 09 April 2026 06:23:36 +0000 (0:00:02.218) 1:12:38.015 ******** 2026-04-09 06:24:17.586227 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 06:24:17.586235 | orchestrator | 2026-04-09 06:24:17.586242 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-04-09 06:24:17.586249 | 
orchestrator | skipping: no hosts matched 2026-04-09 06:24:17.586256 | orchestrator | 2026-04-09 06:24:17.586280 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-04-09 06:24:17.586286 | orchestrator | skipping: no hosts matched 2026-04-09 06:24:17.586292 | orchestrator | 2026-04-09 06:24:17.586299 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-04-09 06:24:17.586306 | orchestrator | skipping: no hosts matched 2026-04-09 06:24:17.586313 | orchestrator | 2026-04-09 06:24:17.586320 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-04-09 06:24:17.586327 | orchestrator | 2026-04-09 06:24:17.586334 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-04-09 06:24:17.586341 | orchestrator | Thursday 09 April 2026 06:23:40 +0000 (0:00:04.202) 1:12:42.217 ******** 2026-04-09 06:24:17.586349 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:24:17.586356 | orchestrator | changed: [testbed-node-2] 2026-04-09 06:24:17.586363 | orchestrator | changed: [testbed-node-1] 2026-04-09 06:24:17.586370 | orchestrator | changed: [testbed-node-3] 2026-04-09 06:24:17.586377 | orchestrator | changed: [testbed-node-4] 2026-04-09 06:24:17.586384 | orchestrator | changed: [testbed-node-5] 2026-04-09 06:24:17.586394 | orchestrator | 2026-04-09 06:24:17.586405 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-04-09 06:24:17.586416 | orchestrator | Thursday 09 April 2026 06:23:43 +0000 (0:00:02.840) 1:12:45.058 ******** 2026-04-09 06:24:17.586426 | orchestrator | changed: [testbed-node-1] 2026-04-09 06:24:17.586436 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:24:17.586467 | orchestrator | changed: [testbed-node-3] 2026-04-09 06:24:17.586477 | orchestrator | changed: [testbed-node-2] 2026-04-09 06:24:17.586488 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 06:24:17.586497 | orchestrator | changed: [testbed-node-5] 2026-04-09 06:24:17.586508 | orchestrator | 2026-04-09 06:24:17.586518 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 06:24:17.586527 | orchestrator | Thursday 09 April 2026 06:23:46 +0000 (0:00:03.254) 1:12:48.313 ******** 2026-04-09 06:24:17.586535 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:24:17.586542 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:24:17.586549 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:24:17.586556 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:24:17.586563 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:24:17.586570 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:24:17.586577 | orchestrator | 2026-04-09 06:24:17.586584 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-09 06:24:17.586591 | orchestrator | Thursday 09 April 2026 06:23:48 +0000 (0:00:02.128) 1:12:50.441 ******** 2026-04-09 06:24:17.586598 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:24:17.586608 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:24:17.586615 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:24:17.586622 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:24:17.586629 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:24:17.586636 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:24:17.586643 | orchestrator | 2026-04-09 06:24:17.586650 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 06:24:17.586657 | orchestrator | Thursday 09 April 2026 06:23:50 +0000 (0:00:02.320) 1:12:52.761 ******** 2026-04-09 06:24:17.586665 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 06:24:17.586674 | 
orchestrator | 2026-04-09 06:24:17.586681 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 06:24:17.586688 | orchestrator | Thursday 09 April 2026 06:23:53 +0000 (0:00:02.190) 1:12:54.952 ******** 2026-04-09 06:24:17.586695 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 06:24:17.586702 | orchestrator | 2026-04-09 06:24:17.586723 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 06:24:17.586731 | orchestrator | Thursday 09 April 2026 06:23:55 +0000 (0:00:02.221) 1:12:57.173 ******** 2026-04-09 06:24:17.586738 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:24:17.586745 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:24:17.586752 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:24:17.586759 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:24:17.586766 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:24:17.586773 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:24:17.586779 | orchestrator | 2026-04-09 06:24:17.586786 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 06:24:17.586793 | orchestrator | Thursday 09 April 2026 06:23:57 +0000 (0:00:02.028) 1:12:59.202 ******** 2026-04-09 06:24:17.586800 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:24:17.586807 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:24:17.586814 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:24:17.586821 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:24:17.586829 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:24:17.586836 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:24:17.586843 | orchestrator | 2026-04-09 06:24:17.586850 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-04-09 06:24:17.586857 | orchestrator | Thursday 09 April 2026 06:23:59 +0000 (0:00:02.494) 1:13:01.697 ******** 2026-04-09 06:24:17.586864 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:24:17.586871 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:24:17.586883 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:24:17.586890 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:24:17.586897 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:24:17.586904 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:24:17.586911 | orchestrator | 2026-04-09 06:24:17.586918 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 06:24:17.586925 | orchestrator | Thursday 09 April 2026 06:24:02 +0000 (0:00:02.183) 1:13:03.880 ******** 2026-04-09 06:24:17.586932 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:24:17.586939 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:24:17.586946 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:24:17.586953 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:24:17.586960 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:24:17.586967 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:24:17.586974 | orchestrator | 2026-04-09 06:24:17.586981 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 06:24:17.586988 | orchestrator | Thursday 09 April 2026 06:24:04 +0000 (0:00:02.119) 1:13:05.999 ******** 2026-04-09 06:24:17.586999 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:24:17.587006 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:24:17.587013 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:24:17.587020 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:24:17.587027 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:24:17.587034 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:24:17.587041 | orchestrator | 
2026-04-09 06:24:17.587048 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 06:24:17.587055 | orchestrator | Thursday 09 April 2026 06:24:06 +0000 (0:00:02.046) 1:13:08.046 ******** 2026-04-09 06:24:17.587062 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:24:17.587069 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:24:17.587076 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:24:17.587083 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:24:17.587090 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:24:17.587097 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:24:17.587103 | orchestrator | 2026-04-09 06:24:17.587110 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 06:24:17.587117 | orchestrator | Thursday 09 April 2026 06:24:07 +0000 (0:00:01.771) 1:13:09.818 ******** 2026-04-09 06:24:17.587124 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:24:17.587131 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:24:17.587150 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:24:17.587157 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:24:17.587164 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:24:17.587171 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:24:17.587178 | orchestrator | 2026-04-09 06:24:17.587184 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 06:24:17.587191 | orchestrator | Thursday 09 April 2026 06:24:10 +0000 (0:00:02.082) 1:13:11.900 ******** 2026-04-09 06:24:17.587198 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:24:17.587205 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:24:17.587212 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:24:17.587218 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:24:17.587225 | orchestrator | ok: [testbed-node-4] 
2026-04-09 06:24:17.587232 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:24:17.587239 | orchestrator |
2026-04-09 06:24:17.587245 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 06:24:17.587252 | orchestrator | Thursday 09 April 2026 06:24:12 +0000 (0:00:02.150) 1:13:14.051 ********
2026-04-09 06:24:17.587259 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:24:17.587266 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:24:17.587272 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:24:17.587278 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:24:17.587284 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:24:17.587291 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:24:17.587298 | orchestrator |
2026-04-09 06:24:17.587305 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 06:24:17.587316 | orchestrator | Thursday 09 April 2026 06:24:14 +0000 (0:00:02.542) 1:13:16.593 ********
2026-04-09 06:24:17.587323 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:24:17.587330 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:24:17.587337 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:24:17.587343 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:24:17.587350 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:24:17.587357 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:24:17.587364 | orchestrator |
2026-04-09 06:24:17.587370 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 06:24:17.587377 | orchestrator | Thursday 09 April 2026 06:24:16 +0000 (0:00:01.785) 1:13:18.379 ********
2026-04-09 06:24:17.587384 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:24:17.587391 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:24:17.587397 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:24:17.587404 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:24:17.587411 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:24:17.587418 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:24:17.587425 | orchestrator |
2026-04-09 06:24:17.587435 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 06:25:13.456611 | orchestrator | Thursday 09 April 2026 06:24:18 +0000 (0:00:02.082) 1:13:20.462 ********
2026-04-09 06:25:13.456718 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.456732 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.456743 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.456752 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:25:13.456761 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:25:13.456770 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:25:13.456779 | orchestrator |
2026-04-09 06:25:13.456789 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 06:25:13.456798 | orchestrator | Thursday 09 April 2026 06:24:20 +0000 (0:00:01.900) 1:13:22.363 ********
2026-04-09 06:25:13.456807 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.456815 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.456824 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.456833 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:25:13.456842 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:25:13.456850 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:25:13.456859 | orchestrator |
2026-04-09 06:25:13.456868 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 06:25:13.456877 | orchestrator | Thursday 09 April 2026 06:24:22 +0000 (0:00:02.337) 1:13:24.701 ********
2026-04-09 06:25:13.456886 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.456895 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.456903 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.456942 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:25:13.456951 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:25:13.456960 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:25:13.456969 | orchestrator |
2026-04-09 06:25:13.456978 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 06:25:13.456986 | orchestrator | Thursday 09 April 2026 06:24:24 +0000 (0:00:01.997) 1:13:26.698 ********
2026-04-09 06:25:13.456996 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.457054 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.457065 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.457075 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:25:13.457083 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:25:13.457098 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:25:13.457113 | orchestrator |
2026-04-09 06:25:13.457127 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 06:25:13.457160 | orchestrator | Thursday 09 April 2026 06:24:26 +0000 (0:00:01.764) 1:13:28.463 ********
2026-04-09 06:25:13.457176 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.457217 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.457233 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.457248 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:25:13.457263 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:25:13.457277 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:25:13.457292 | orchestrator |
2026-04-09 06:25:13.457307 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 06:25:13.457322 | orchestrator | Thursday 09 April 2026 06:24:28 +0000 (0:00:02.138) 1:13:30.602 ********
2026-04-09 06:25:13.457338 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.457352 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:25:13.457362 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:25:13.457372 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:25:13.457383 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:25:13.457393 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:25:13.457403 | orchestrator |
2026-04-09 06:25:13.457414 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 06:25:13.457425 | orchestrator | Thursday 09 April 2026 06:24:30 +0000 (0:00:01.908) 1:13:32.510 ********
2026-04-09 06:25:13.457437 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.457448 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:25:13.457459 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:25:13.457470 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:25:13.457481 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:25:13.457492 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:25:13.457504 | orchestrator |
2026-04-09 06:25:13.457516 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 06:25:13.457528 | orchestrator | Thursday 09 April 2026 06:24:32 +0000 (0:00:02.148) 1:13:34.659 ********
2026-04-09 06:25:13.457537 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.457547 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:25:13.457556 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:25:13.457566 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:25:13.457575 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:25:13.457585 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:25:13.457594 | orchestrator |
2026-04-09 06:25:13.457604 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-09 06:25:13.457614 | orchestrator | Thursday 09 April 2026 06:24:35 +0000 (0:00:02.322) 1:13:36.981 ********
2026-04-09 06:25:13.457623 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.457633 | orchestrator |
2026-04-09 06:25:13.457642 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-09 06:25:13.457652 | orchestrator | Thursday 09 April 2026 06:24:38 +0000 (0:00:03.212) 1:13:40.194 ********
2026-04-09 06:25:13.457661 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.457671 | orchestrator |
2026-04-09 06:25:13.457680 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-09 06:25:13.457690 | orchestrator | Thursday 09 April 2026 06:24:41 +0000 (0:00:03.147) 1:13:43.341 ********
2026-04-09 06:25:13.457699 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.457709 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:25:13.457718 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:25:13.457728 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:25:13.457737 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:25:13.457747 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:25:13.457756 | orchestrator |
2026-04-09 06:25:13.457766 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-09 06:25:13.457776 | orchestrator | Thursday 09 April 2026 06:24:43 +0000 (0:00:02.498) 1:13:45.840 ********
2026-04-09 06:25:13.457785 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.457795 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:25:13.457804 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:25:13.457813 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:25:13.457823 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:25:13.457832 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:25:13.457842 | orchestrator |
2026-04-09 06:25:13.457860 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-09 06:25:13.457888 | orchestrator | Thursday 09 April 2026 06:24:46 +0000 (0:00:02.534) 1:13:48.344 ********
2026-04-09 06:25:13.457900 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 06:25:13.457911 | orchestrator |
2026-04-09 06:25:13.457921 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-09 06:25:13.457931 | orchestrator | Thursday 09 April 2026 06:24:49 +0000 (0:00:02.534) 1:13:50.878 ********
2026-04-09 06:25:13.457940 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.457950 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:25:13.457959 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:25:13.457969 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:25:13.457978 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:25:13.457987 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:25:13.458003 | orchestrator |
2026-04-09 06:25:13.458120 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-09 06:25:13.458131 | orchestrator | Thursday 09 April 2026 06:24:51 +0000 (0:00:02.667) 1:13:53.546 ********
2026-04-09 06:25:13.458141 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:25:13.458151 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:25:13.458161 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:25:13.458170 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:25:13.458180 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:25:13.458189 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:25:13.458199 | orchestrator |
2026-04-09 06:25:13.458209 | orchestrator | PLAY [Complete upgrade] ********************************************************
2026-04-09 06:25:13.458219 | orchestrator |
2026-04-09 06:25:13.458228 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 06:25:13.458238 | orchestrator | Thursday 09 April 2026 06:24:56 +0000 (0:00:04.496) 1:13:58.043 ********
2026-04-09 06:25:13.458248 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.458257 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:25:13.458267 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:25:13.458276 | orchestrator |
2026-04-09 06:25:13.458286 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 06:25:13.458296 | orchestrator | Thursday 09 April 2026 06:24:57 +0000 (0:00:01.715) 1:13:59.759 ********
2026-04-09 06:25:13.458305 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.458315 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:25:13.458324 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:25:13.458341 | orchestrator |
2026-04-09 06:25:13.458351 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-04-09 06:25:13.458361 | orchestrator | Thursday 09 April 2026 06:24:59 +0000 (0:00:01.382) 1:14:01.142 ********
2026-04-09 06:25:13.458371 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:25:13.458380 | orchestrator |
2026-04-09 06:25:13.458390 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-04-09 06:25:13.458400 | orchestrator | Thursday 09 April 2026 06:25:01 +0000 (0:00:02.317) 1:14:03.459 ********
2026-04-09 06:25:13.458409 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.458419 | orchestrator |
2026-04-09 06:25:13.458429 | orchestrator | PLAY [Upgrade node-exporter] ***************************************************
2026-04-09 06:25:13.458438 | orchestrator |
2026-04-09 06:25:13.458448 | orchestrator | TASK [Stop node-exporter] ******************************************************
2026-04-09 06:25:13.458458 | orchestrator | Thursday 09 April 2026 06:25:03 +0000 (0:00:02.336) 1:14:05.796 ********
2026-04-09 06:25:13.458467 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.458477 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.458486 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.458496 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:25:13.458505 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:25:13.458515 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:25:13.458533 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:25:13.458542 | orchestrator |
2026-04-09 06:25:13.458552 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 06:25:13.458562 | orchestrator | Thursday 09 April 2026 06:25:05 +0000 (0:00:01.860) 1:14:07.657 ********
2026-04-09 06:25:13.458571 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.458581 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.458590 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.458600 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:25:13.458610 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:25:13.458619 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:25:13.458629 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:25:13.458638 | orchestrator |
2026-04-09 06:25:13.458648 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-04-09 06:25:13.458658 | orchestrator | Thursday 09 April 2026 06:25:08 +0000 (0:00:02.456) 1:14:10.114 ********
2026-04-09 06:25:13.458667 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.458677 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.458686 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.458696 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:25:13.458705 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:25:13.458715 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:25:13.458724 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:25:13.458734 | orchestrator |
2026-04-09 06:25:13.458743 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-04-09 06:25:13.458753 | orchestrator | Thursday 09 April 2026 06:25:10 +0000 (0:00:02.171) 1:14:12.286 ********
2026-04-09 06:25:13.458763 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.458772 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.458782 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.458791 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:25:13.458801 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:25:13.458810 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:25:13.458820 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:25:13.458829 | orchestrator |
2026-04-09 06:25:13.458839 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
2026-04-09 06:25:13.458848 | orchestrator | Thursday 09 April 2026 06:25:12 +0000 (0:00:02.510) 1:14:14.796 ********
2026-04-09 06:25:13.458858 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:25:13.458868 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:25:13.458877 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:25:13.458895 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:26:01.455828 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:26:01.456078 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:26:01.456112 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.456133 | orchestrator |
2026-04-09 06:26:01.456200 | orchestrator | PLAY [Upgrade monitoring node] *************************************************
2026-04-09 06:26:01.456224 | orchestrator |
2026-04-09 06:26:01.456245 | orchestrator | TASK [Stop monitoring services] ************************************************
2026-04-09 06:26:01.456266 | orchestrator | Thursday 09 April 2026 06:25:16 +0000 (0:00:03.182) 1:14:17.978 ********
2026-04-09 06:26:01.456286 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)
2026-04-09 06:26:01.456307 | orchestrator | skipping: [testbed-manager] => (item=prometheus)
2026-04-09 06:26:01.456327 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)
2026-04-09 06:26:01.456350 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.456373 | orchestrator |
2026-04-09 06:26:01.456393 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-04-09 06:26:01.456415 | orchestrator | Thursday 09 April 2026 06:25:17 +0000 (0:00:01.129) 1:14:19.108 ********
2026-04-09 06:26:01.456436 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.456459 | orchestrator |
2026-04-09 06:26:01.456483 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-04-09 06:26:01.456537 | orchestrator | Thursday 09 April 2026 06:25:18 +0000 (0:00:01.113) 1:14:20.221 ********
2026-04-09 06:26:01.456561 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.456584 | orchestrator |
2026-04-09 06:26:01.456607 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-04-09 06:26:01.456630 | orchestrator | Thursday 09 April 2026 06:25:19 +0000 (0:00:01.172) 1:14:21.394 ********
2026-04-09 06:26:01.456650 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.456671 | orchestrator |
2026-04-09 06:26:01.456693 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-04-09 06:26:01.456713 | orchestrator | Thursday 09 April 2026 06:25:20 +0000 (0:00:01.178) 1:14:22.572 ********
2026-04-09 06:26:01.456733 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.456753 | orchestrator |
2026-04-09 06:26:01.456773 | orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
2026-04-09 06:26:01.456793 | orchestrator | Thursday 09 April 2026 06:25:21 +0000 (0:00:01.126) 1:14:23.699 ********
2026-04-09 06:26:01.456832 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
2026-04-09 06:26:01.456852 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
2026-04-09 06:26:01.456871 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.456892 | orchestrator |
2026-04-09 06:26:01.456945 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
2026-04-09 06:26:01.456967 | orchestrator | Thursday 09 April 2026 06:25:23 +0000 (0:00:01.308) 1:14:25.007 ********
2026-04-09 06:26:01.456986 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.457006 | orchestrator |
2026-04-09 06:26:01.457026 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
2026-04-09 06:26:01.457046 | orchestrator | Thursday 09 April 2026 06:25:24 +0000 (0:00:01.125) 1:14:26.133 ********
2026-04-09 06:26:01.457066 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.457086 | orchestrator |
2026-04-09 06:26:01.457105 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
2026-04-09 06:26:01.457124 | orchestrator | Thursday 09 April 2026 06:25:25 +0000 (0:00:01.237) 1:14:27.370 ********
2026-04-09 06:26:01.457142 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.457161 | orchestrator |
2026-04-09 06:26:01.457178 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
2026-04-09 06:26:01.457197 | orchestrator | Thursday 09 April 2026 06:25:26 +0000 (0:00:01.118) 1:14:28.488 ********
2026-04-09 06:26:01.457214 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
2026-04-09 06:26:01.457233 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
2026-04-09 06:26:01.457252 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.457269 | orchestrator |
2026-04-09 06:26:01.457287 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
2026-04-09 06:26:01.457306 | orchestrator | Thursday 09 April 2026 06:25:27 +0000 (0:00:01.154) 1:14:29.643 ********
2026-04-09 06:26:01.457324 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.457343 | orchestrator |
2026-04-09 06:26:01.457360 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
2026-04-09 06:26:01.457378 | orchestrator | Thursday 09 April 2026 06:25:28 +0000 (0:00:01.166) 1:14:30.809 ********
2026-04-09 06:26:01.457396 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.457415 | orchestrator |
2026-04-09 06:26:01.457435 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
2026-04-09 06:26:01.457454 | orchestrator | Thursday 09 April 2026 06:25:30 +0000 (0:00:01.159) 1:14:31.969 ********
2026-04-09 06:26:01.457475 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.457495 | orchestrator |
2026-04-09 06:26:01.457515 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
2026-04-09 06:26:01.457536 | orchestrator | Thursday 09 April 2026 06:25:31 +0000 (0:00:01.122) 1:14:33.092 ********
2026-04-09 06:26:01.457555 | orchestrator | skipping: [testbed-manager]
2026-04-09 06:26:01.457592 | orchestrator |
2026-04-09 06:26:01.457610 | orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
2026-04-09 06:26:01.457628 | orchestrator |
2026-04-09 06:26:01.457645 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 06:26:01.457664 | orchestrator | Thursday 09 April 2026 06:25:33 +0000 (0:00:01.986) 1:14:35.079 ********
2026-04-09 06:26:01.457683 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:26:01.457702 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:26:01.457719 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:26:01.457737 | orchestrator |
2026-04-09 06:26:01.457755 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-04-09 06:26:01.457774 | orchestrator | Thursday 09 April 2026 06:25:34 +0000 (0:00:01.312) 1:14:36.391 ********
2026-04-09 06:26:01.457792 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:26:01.457809 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:26:01.457843 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:26:01.457855 | orchestrator |
2026-04-09 06:26:01.457866 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-04-09 06:26:01.457877 | orchestrator | Thursday 09 April 2026 06:25:35 +0000 (0:00:01.374) 1:14:37.765 ********
2026-04-09 06:26:01.457888 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:26:01.457930 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:26:01.457944 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:26:01.457955 | orchestrator |
2026-04-09 06:26:01.457966 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-04-09 06:26:01.457977 | orchestrator | Thursday 09 April 2026 06:25:37 +0000 (0:00:01.327) 1:14:39.093 ********
2026-04-09 06:26:01.457988 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:26:01.457999 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:26:01.458010 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:26:01.458091 | orchestrator |
2026-04-09 06:26:01.458103 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-04-09 06:26:01.458114 | orchestrator | Thursday 09 April 2026 06:25:38 +0000 (0:00:01.398) 1:14:40.491 ********
2026-04-09 06:26:01.458125 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:26:01.458136 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:26:01.458147 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:26:01.458157 | orchestrator |
2026-04-09 06:26:01.458168 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
2026-04-09 06:26:01.458179 | orchestrator | Thursday 09 April 2026 06:25:39 +0000 (0:00:01.346) 1:14:41.838 ********
2026-04-09 06:26:01.458190 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:26:01.458201 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:26:01.458213 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:26:01.458223 | orchestrator |
2026-04-09 06:26:01.458234 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
2026-04-09 06:26:01.458245 | orchestrator | Thursday 09 April 2026 06:25:41 +0000 (0:00:01.762) 1:14:43.600 ********
2026-04-09 06:26:01.458256 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:26:01.458267 | orchestrator |
2026-04-09 06:26:01.458278 | orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
2026-04-09 06:26:01.458289 | orchestrator |
2026-04-09 06:26:01.458299 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 06:26:01.458319 | orchestrator | Thursday 09 April 2026 06:25:43 +0000 (0:00:01.641) 1:14:45.241 ********
2026-04-09 06:26:01.458329 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:26:01.458340 | orchestrator |
2026-04-09 06:26:01.458349 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 06:26:01.458359 | orchestrator | Thursday 09 April 2026 06:25:44 +0000 (0:00:01.533) 1:14:46.775 ********
2026-04-09 06:26:01.458369 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:26:01.458379 | orchestrator |
2026-04-09 06:26:01.458388 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-04-09 06:26:01.458409 | orchestrator | Thursday 09 April 2026 06:25:46 +0000 (0:00:01.109) 1:14:47.885 ********
2026-04-09 06:26:01.458419 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:26:01.458428 | orchestrator |
2026-04-09 06:26:01.458438 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-04-09 06:26:01.458449 | orchestrator | Thursday 09 April 2026 06:25:47 +0000 (0:00:01.151) 1:14:49.036 ********
2026-04-09 06:26:01.458458 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:26:01.458468 | orchestrator |
2026-04-09 06:26:01.458478 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-04-09 06:26:01.458488 | orchestrator | Thursday 09 April 2026 06:25:50 +0000 (0:00:02.983) 1:14:52.019 ********
2026-04-09 06:26:01.458498 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:26:01.458507 | orchestrator |
2026-04-09 06:26:01.458517 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-04-09 06:26:01.458526 | orchestrator | Thursday 09 April 2026 06:25:54 +0000 (0:00:04.033) 1:14:56.053 ********
2026-04-09 06:26:01.458536 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:26:01.458546 | orchestrator |
2026-04-09 06:26:01.458556 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-04-09 06:26:01.458566 | orchestrator |
2026-04-09 06:26:01.458575 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-04-09 06:26:01.458585 | orchestrator | Thursday 09 April 2026 06:25:56 +0000 (0:00:02.226) 1:14:58.280 ********
2026-04-09 06:26:01.458594 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:26:01.458604 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:26:01.458614 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:26:01.458624 | orchestrator |
2026-04-09 06:26:01.458633 | orchestrator | TASK [Show ceph status] ********************************************************
2026-04-09 06:26:01.458643 | orchestrator | Thursday 09 April 2026 06:25:57 +0000 (0:00:01.509) 1:14:59.789 ********
2026-04-09 06:26:01.458653 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:26:01.458662 | orchestrator |
2026-04-09 06:26:01.458672 | orchestrator | TASK [Show all daemons version] ************************************************
2026-04-09 06:26:01.458682 | orchestrator | Thursday 09 April 2026 06:26:00 +0000 (0:00:02.310) 1:15:02.100 ********
2026-04-09 06:26:01.458692 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:26:01.458701 | orchestrator |
2026-04-09 06:26:01.458711 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 06:26:01.458722 | orchestrator | localhost       : ok=0   changed=0  unreachable=0 failed=0 skipped=1   rescued=0 ignored=0
2026-04-09 06:26:01.458733 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-04-09 06:26:01.458745 | orchestrator | testbed-node-0  : ok=248 changed=19 unreachable=0 failed=0 skipped=369 rescued=0 ignored=0
2026-04-09 06:26:01.458755 | orchestrator | testbed-node-1  : ok=191 changed=14 unreachable=0 failed=0 skipped=343 rescued=0 ignored=0
2026-04-09 06:26:01.458773 | orchestrator | testbed-node-2  : ok=196 changed=14 unreachable=0 failed=0 skipped=344 rescued=0 ignored=0
2026-04-09 06:26:04.267468 | orchestrator | testbed-node-3  : ok=316 changed=21 unreachable=0 failed=0 skipped=355 rescued=0 ignored=0
2026-04-09 06:26:04.267567 | orchestrator | testbed-node-4  : ok=308 changed=16 unreachable=0 failed=0 skipped=352 rescued=0 ignored=0
2026-04-09 06:26:04.267582 | orchestrator | testbed-node-5  : ok=303 changed=17 unreachable=0 failed=0 skipped=337 rescued=0 ignored=0
2026-04-09 06:26:04.267596 | orchestrator |
2026-04-09 06:26:04.267609 | orchestrator |
2026-04-09 06:26:04.267620 | orchestrator |
2026-04-09 06:26:04.267632 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 06:26:04.267670 | orchestrator | Thursday 09 April 2026 06:26:03 +0000 (0:00:03.372) 1:15:05.472 ********
2026-04-09 06:26:04.267682 | orchestrator | ===============================================================================
2026-04-09 06:26:04.267693 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 76.64s
2026-04-09 06:26:04.267705 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 75.51s
2026-04-09 06:26:04.267716 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 33.75s
2026-04-09 06:26:04.267727 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 33.09s
2026-04-09 06:26:04.267739 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.24s
2026-04-09 06:26:04.267750 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.55s
2026-04-09 06:26:04.267761 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 29.48s
2026-04-09 06:26:04.267773 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 24.81s
2026-04-09 06:26:04.267784 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.11s
2026-04-09 06:26:04.267811 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.87s
2026-04-09 06:26:04.267823 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 21.62s
2026-04-09 06:26:04.267834 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 18.13s
2026-04-09 06:26:04.267844 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.91s
2026-04-09 06:26:04.267855 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.42s
2026-04-09 06:26:04.267866 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.03s
2026-04-09 06:26:04.267877 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.66s
2026-04-09 06:26:04.267888 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.38s
2026-04-09 06:26:04.267946 | orchestrator | Stop ceph osd ---------------------------------------------------------- 11.67s
2026-04-09 06:26:04.267957 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.47s
2026-04-09 06:26:04.267968 | orchestrator | Set cluster configs ---------------------------------------------------- 10.61s
2026-04-09 06:26:04.458538 | orchestrator | + osism apply cephclient
2026-04-09 06:26:05.782515 | orchestrator | 2026-04-09 06:26:05 | INFO  | Prepare task for execution of cephclient.
2026-04-09 06:26:05.849261 | orchestrator | 2026-04-09 06:26:05 | INFO  | Task 57b4c6a7-ff36-4e75-bbe0-f5fc0e2ec8d9 (cephclient) was prepared for execution.
2026-04-09 06:26:05.849357 | orchestrator | 2026-04-09 06:26:05 | INFO  | It takes a moment until task 57b4c6a7-ff36-4e75-bbe0-f5fc0e2ec8d9 (cephclient) has been started and output is visible here.
2026-04-09 06:26:33.658380 | orchestrator |
2026-04-09 06:26:33.658541 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-09 06:26:33.658560 | orchestrator |
2026-04-09 06:26:33.658573 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-09 06:26:33.658584 | orchestrator | Thursday 09 April 2026 06:26:11 +0000 (0:00:01.978) 0:00:01.978 ********
2026-04-09 06:26:33.658596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-09 06:26:33.658610 | orchestrator |
2026-04-09 06:26:33.658622 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-09 06:26:33.658632 | orchestrator | Thursday 09 April 2026 06:26:13 +0000 (0:00:01.828) 0:00:03.806 ********
2026-04-09 06:26:33.658644 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-09 06:26:33.658656 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-09 06:26:33.658669 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-09 06:26:33.658710 | orchestrator |
2026-04-09 06:26:33.658722 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-09 06:26:33.658733 | orchestrator | Thursday 09 April 2026 06:26:16 +0000 (0:00:02.639) 0:00:06.445 ********
2026-04-09 06:26:33.658744 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-09 06:26:33.658756 | orchestrator |
2026-04-09 06:26:33.658767 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-09 06:26:33.658778 | orchestrator | Thursday 09 April 2026 06:26:18 +0000 (0:00:02.051) 0:00:08.497 ********
2026-04-09 06:26:33.658789 | orchestrator | ok: [testbed-manager]
2026-04-09 06:26:33.658800 | orchestrator |
2026-04-09 06:26:33.658811 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-09 06:26:33.658822 | orchestrator | Thursday 09 April 2026 06:26:20 +0000 (0:00:01.897) 0:00:10.395 ********
2026-04-09 06:26:33.658858 | orchestrator | ok: [testbed-manager]
2026-04-09 06:26:33.658870 | orchestrator |
2026-04-09 06:26:33.658881 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-09 06:26:33.658892 | orchestrator | Thursday 09 April 2026 06:26:21 +0000 (0:00:01.882) 0:00:12.278 ********
2026-04-09 06:26:33.658902 | orchestrator | ok: [testbed-manager]
2026-04-09 06:26:33.658913 | orchestrator |
2026-04-09 06:26:33.658924 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-09 06:26:33.658935 | orchestrator | Thursday 09 April 2026 06:26:24 +0000 (0:00:02.244) 0:00:14.522 ********
2026-04-09 06:26:33.658946 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-09 06:26:33.658959 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-04-09 06:26:33.658970 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-09 06:26:33.658981 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-09 06:26:33.658992 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-09 06:26:33.659003 | orchestrator |
2026-04-09 06:26:33.659014 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-09 06:26:33.659025 | orchestrator | Thursday 09 April 2026 06:26:29 +0000 (0:00:05.005) 0:00:19.528 ********
2026-04-09 06:26:33.659036 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-09 06:26:33.659047 | orchestrator |
2026-04-09 06:26:33.659058 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-09 06:26:33.659070
| orchestrator | Thursday 09 April 2026 06:26:30 +0000 (0:00:01.527) 0:00:21.056 ******** 2026-04-09 06:26:33.659081 | orchestrator | skipping: [testbed-manager] 2026-04-09 06:26:33.659092 | orchestrator | 2026-04-09 06:26:33.659103 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-09 06:26:33.659114 | orchestrator | Thursday 09 April 2026 06:26:31 +0000 (0:00:01.111) 0:00:22.167 ******** 2026-04-09 06:26:33.659125 | orchestrator | skipping: [testbed-manager] 2026-04-09 06:26:33.659136 | orchestrator | 2026-04-09 06:26:33.659148 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 06:26:33.659175 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 06:26:33.659188 | orchestrator | 2026-04-09 06:26:33.659199 | orchestrator | 2026-04-09 06:26:33.659209 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 06:26:33.659220 | orchestrator | Thursday 09 April 2026 06:26:33 +0000 (0:00:01.498) 0:00:23.665 ******** 2026-04-09 06:26:33.659231 | orchestrator | =============================================================================== 2026-04-09 06:26:33.659242 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 5.01s 2026-04-09 06:26:33.659253 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.64s 2026-04-09 06:26:33.659263 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.24s 2026-04-09 06:26:33.659274 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.05s 2026-04-09 06:26:33.659295 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.90s 2026-04-09 06:26:33.659307 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file 
---------------- 1.88s 2026-04-09 06:26:33.659317 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.83s 2026-04-09 06:26:33.659328 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.53s 2026-04-09 06:26:33.659339 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.50s 2026-04-09 06:26:33.659350 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.11s 2026-04-09 06:26:33.856052 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-09 06:26:33.856199 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-04-09 06:26:33.863638 | orchestrator | + set -e 2026-04-09 06:26:33.863666 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 06:26:33.863680 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 06:26:33.863692 | orchestrator | ++ INTERACTIVE=false 2026-04-09 06:26:33.863704 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 06:26:33.863715 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 06:26:33.863933 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 06:26:33.863946 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 06:26:33.863957 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 06:26:33.863976 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 06:26:33.863987 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 06:26:33.863999 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 06:26:33.864010 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 06:26:33.864021 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 06:26:33.864032 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 06:26:33.864044 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 06:26:33.864055 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 06:26:33.864066 | orchestrator | ++ export ARA=false 2026-04-09 
06:26:33.864077 | orchestrator | ++ ARA=false 2026-04-09 06:26:33.864088 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 06:26:33.864100 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 06:26:33.864111 | orchestrator | ++ export TEMPEST=false 2026-04-09 06:26:33.864122 | orchestrator | ++ TEMPEST=false 2026-04-09 06:26:33.864133 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 06:26:33.864143 | orchestrator | ++ IS_ZUUL=true 2026-04-09 06:26:33.864155 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 06:26:33.864166 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 06:26:33.864958 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 06:26:33.864980 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 06:26:33.864992 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 06:26:33.865003 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 06:26:33.865014 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 06:26:33.865025 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 06:26:33.865036 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 06:26:33.865047 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 06:26:33.865058 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-09 06:26:33.865069 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-09 06:26:33.865080 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-09 06:26:33.865536 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-09 06:26:33.871620 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-09 06:26:33.871675 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-09 06:26:33.871690 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-09 06:26:33.871704 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-04-09 06:26:42.980881 | orchestrator | 2026-04-09 06:26:42 | ERROR  | Unable to get ansible vault password 
2026-04-09 06:26:42.980980 | orchestrator | 2026-04-09 06:26:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 06:26:42.980996 | orchestrator | 2026-04-09 06:26:42 | ERROR  | Dropping encrypted entries 2026-04-09 06:26:43.018697 | orchestrator | 2026-04-09 06:26:43 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-09 06:26:43.019723 | orchestrator | 2026-04-09 06:26:43 | INFO  | Kolla configuration check passed 2026-04-09 06:26:43.253004 | orchestrator | 2026-04-09 06:26:43 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-04-09 06:26:43.271171 | orchestrator | 2026-04-09 06:26:43 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-04-09 06:26:43.554738 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-09 06:26:49.683501 | orchestrator | 2026-04-09 06:26:49 | ERROR  | Unable to get ansible vault password 2026-04-09 06:26:49.683607 | orchestrator | 2026-04-09 06:26:49 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 06:26:49.683623 | orchestrator | 2026-04-09 06:26:49 | ERROR  | Dropping encrypted entries 2026-04-09 06:26:49.720336 | orchestrator | 2026-04-09 06:26:49 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-09 06:26:49.868706 | orchestrator | 2026-04-09 06:26:49 | INFO  | Found 207 classic queue(s) in vhost '/': 2026-04-09 06:26:49.868974 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-04-09 06:26:49.869131 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-04-09 06:26:49.869153 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-04-09 06:26:49.869172 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-04-09 06:26:49.869209 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - barbican.workers_fanout_450cce1ac9394a10b3fb873c58725b8a (vhost: /, messages: 0) 2026-04-09 06:26:49.869231 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - barbican.workers_fanout_9f06d98685624c69b3ac95e6a3ff9bac (vhost: /, messages: 0) 2026-04-09 06:26:49.869251 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - barbican.workers_fanout_cf4ef050da0a4cd39d96b1ee21f2894c (vhost: /, messages: 0) 2026-04-09 06:26:49.869269 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-04-09 06:26:49.869416 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central (vhost: /, messages: 0) 2026-04-09 06:26:49.869694 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.869726 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.869745 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.869945 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central_fanout_245d90ee3aac47fb88e60e273cf24ccc (vhost: /, messages: 0) 2026-04-09 06:26:49.869979 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central_fanout_51f5f64e6cd247779ea68d562f1c6b4e (vhost: /, messages: 0) 2026-04-09 
06:26:49.870246 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central_fanout_60497334245b4f479d9bbd451602334b (vhost: /, messages: 0) 2026-04-09 06:26:49.870466 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central_fanout_66d2e85d806846d9b9a772534d26a3fa (vhost: /, messages: 0) 2026-04-09 06:26:49.870488 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central_fanout_9fa98b2c19f44c4093942735d6476a52 (vhost: /, messages: 0) 2026-04-09 06:26:49.870501 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - central_fanout_b9a0a81986d04c31b7105e6d6a098f98 (vhost: /, messages: 0) 2026-04-09 06:26:49.870650 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-04-09 06:26:49.870669 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.870793 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.871001 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.871247 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-backup_fanout_645504822c78416bbbb9fa268be79d26 (vhost: /, messages: 0) 2026-04-09 06:26:49.871265 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-backup_fanout_b727482a095e481db7ad0497a1577cd0 (vhost: /, messages: 0) 2026-04-09 06:26:49.871419 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-backup_fanout_bf178fb3e3de4415b34d59da4b501c4f (vhost: /, messages: 0) 2026-04-09 06:26:49.872273 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-04-09 06:26:49.872293 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.872310 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.872433 | orchestrator | 2026-04-09 
06:26:49 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.872555 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-scheduler_fanout_052bb76fb345429d8ce5ffc403448643 (vhost: /, messages: 0) 2026-04-09 06:26:49.872578 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-scheduler_fanout_537d41a2f7c04f758550f200c44933b4 (vhost: /, messages: 0) 2026-04-09 06:26:49.872738 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-scheduler_fanout_73fcf6565095481595ba75e5405b3d66 (vhost: /, messages: 0) 2026-04-09 06:26:49.872761 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-04-09 06:26:49.873227 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-04-09 06:26:49.873258 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.873282 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_52debc533429434abeb00a456b20d3ca (vhost: /, messages: 0) 2026-04-09 06:26:49.873300 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-04-09 06:26:49.873424 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.873646 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_1243b4b13ef04c42a29392bae846d470 (vhost: /, messages: 0) 2026-04-09 06:26:49.873665 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-04-09 06:26:49.873675 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.873832 | orchestrator | 2026-04-09 06:26:49 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_bf60cdf3770444b994a2ad4275322ba1 (vhost: /, messages: 0) 2026-04-09 06:26:49.873849 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume_fanout_089843ff50994fd28efb726d4c573522 (vhost: /, messages: 0) 2026-04-09 06:26:49.873979 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume_fanout_661ea2c2ef2548da968e132ab1e8cdb0 (vhost: /, messages: 0) 2026-04-09 06:26:49.873995 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - cinder-volume_fanout_be7d0969470248b49961079212ba568a (vhost: /, messages: 0) 2026-04-09 06:26:49.874404 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - compute (vhost: /, messages: 0) 2026-04-09 06:26:49.874426 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-04-09 06:26:49.874437 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-04-09 06:26:49.874451 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-04-09 06:26:49.874683 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - compute_fanout_855b4f1a3b7d4c8091e8f703a68909b0 (vhost: /, messages: 0) 2026-04-09 06:26:49.874702 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - compute_fanout_b85ab53b7d7c4893aa67a32a647d57c8 (vhost: /, messages: 0) 2026-04-09 06:26:49.874895 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - compute_fanout_ca17ea1c60c24cfdb953934def7eea03 (vhost: /, messages: 0) 2026-04-09 06:26:49.874914 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor (vhost: /, messages: 0) 2026-04-09 06:26:49.875228 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.875244 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.875253 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-04-09 06:26:49.875261 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor_fanout_0f54e11caa32448bb0c2ae259ddc526f (vhost: /, messages: 0) 2026-04-09 06:26:49.875426 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor_fanout_35f7332032de49e5af9941c0f399c879 (vhost: /, messages: 0) 2026-04-09 06:26:49.875492 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor_fanout_a29c577640ff47b08818cd6f57cb16a7 (vhost: /, messages: 0) 2026-04-09 06:26:49.875647 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor_fanout_a355dc39a9d540819161e57dde5b1aa4 (vhost: /, messages: 0) 2026-04-09 06:26:49.875746 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor_fanout_b5b2d3ecd3794315a009d6bb8e779631 (vhost: /, messages: 0) 2026-04-09 06:26:49.876214 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - conductor_fanout_d3e38ba228dc4c11ad5acc3ca95392f7 (vhost: /, messages: 0) 2026-04-09 06:26:49.876229 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - event.sample (vhost: /, messages: 10) 2026-04-09 06:26:49.876238 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-09 06:26:49.876246 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor.cykcdelam52s (vhost: /, messages: 0) 2026-04-09 06:26:49.876446 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor.hbzgkxncdqav (vhost: /, messages: 0) 2026-04-09 06:26:49.876462 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor.mxbpe6z4rg4r (vhost: /, messages: 0) 2026-04-09 06:26:49.876725 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor_fanout_0c1eb08ade4b4112811cd116bdd6b4c2 (vhost: /, messages: 0) 2026-04-09 06:26:49.876738 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor_fanout_3aca7ac949d248549306a81b8e7fed9b (vhost: /, messages: 0) 2026-04-09 06:26:49.876745 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor_fanout_532b16f5be3c4962b53ddf24cfa6d53a (vhost: /, 
messages: 0) 2026-04-09 06:26:49.876983 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor_fanout_55d3ec5dcdb14ef0b29dbe13439f2cf7 (vhost: /, messages: 0) 2026-04-09 06:26:49.876996 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor_fanout_57ba0816dc4e47c4a46e9bc9d8af3ce6 (vhost: /, messages: 0) 2026-04-09 06:26:49.877012 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor_fanout_73cd068eca4a46d68c5e268ecfbceaed (vhost: /, messages: 0) 2026-04-09 06:26:49.877266 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor_fanout_7e57901f145d443bb977ac07fcc7ff75 (vhost: /, messages: 0) 2026-04-09 06:26:49.877278 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor_fanout_ad329c41f85d47b481d20dd1d3221641 (vhost: /, messages: 0) 2026-04-09 06:26:49.877285 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - magnum-conductor_fanout_c4d60ac9a043435f84fe9f5e8e3fbe0b (vhost: /, messages: 0) 2026-04-09 06:26:49.877483 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-04-09 06:26:49.877686 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.877700 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.877707 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.877911 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-data_fanout_4e950e6a826848c3923dd7771db0aeba (vhost: /, messages: 0) 2026-04-09 06:26:49.877925 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-data_fanout_7586c31119294d6e8ad3d4bd5cc1a86f (vhost: /, messages: 0) 2026-04-09 06:26:49.877932 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-data_fanout_7ee53fee2dce4381b0b10edb0ebe8531 (vhost: /, messages: 0) 2026-04-09 06:26:49.878060 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-04-09 06:26:49.878421 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.878442 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.878450 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.878457 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-scheduler_fanout_09ac1a380ca14d669e023b8b14bf4455 (vhost: /, messages: 0) 2026-04-09 06:26:49.878515 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-scheduler_fanout_18c5a1e7629640c4975654edc0cad3d2 (vhost: /, messages: 0) 2026-04-09 06:26:49.878866 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-scheduler_fanout_d50504182c764c768235d2ce73b6168d (vhost: /, messages: 0) 2026-04-09 06:26:49.878964 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-04-09 06:26:49.878975 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-04-09 06:26:49.878990 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-04-09 06:26:49.879001 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-04-09 06:26:49.879295 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-share_fanout_2c8f381f8d39447bb5cc9f0fe677e59a (vhost: /, messages: 0) 2026-04-09 06:26:49.879320 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-share_fanout_49b4befada5445c1a2b47ab1fb66d340 (vhost: /, messages: 0) 2026-04-09 06:26:49.879408 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - manila-share_fanout_4ce3ecb188ff4840bcbbd1bcea6e7843 (vhost: /, messages: 0) 2026-04-09 06:26:49.879432 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - 
notifications.audit (vhost: /, messages: 0) 2026-04-09 06:26:49.879843 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-04-09 06:26:49.879870 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-04-09 06:26:49.879880 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-04-09 06:26:49.879889 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-04-09 06:26:49.879898 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-04-09 06:26:49.880144 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-04-09 06:26:49.880163 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-04-09 06:26:49.880169 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.880176 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.880384 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.880402 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - octavia_provisioning_v2_fanout_65358c402ffa41d0863bd8cbdb3e9387 (vhost: /, messages: 0) 2026-04-09 06:26:49.880415 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - octavia_provisioning_v2_fanout_b52e9881060a453d9d39adff3facb872 (vhost: /, messages: 0) 2026-04-09 06:26:49.880539 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - octavia_provisioning_v2_fanout_c85a5b68848c452a8862aaff6e49290e (vhost: /, messages: 0) 2026-04-09 06:26:49.880551 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - producer (vhost: /, messages: 0) 2026-04-09 06:26:49.880714 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - 
producer.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.880725 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.880732 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.880941 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - producer_fanout_16b170b4bcbd4fcabea50df9186efcdd (vhost: /, messages: 0) 2026-04-09 06:26:49.880954 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - producer_fanout_74f3ddbe9efe4f9cb0d81c6814785534 (vhost: /, messages: 0) 2026-04-09 06:26:49.881279 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - producer_fanout_7a52854e829944629d4f518dbb2a60c1 (vhost: /, messages: 0) 2026-04-09 06:26:49.881291 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - producer_fanout_95309e4829194bc38663576bdd4d211f (vhost: /, messages: 0) 2026-04-09 06:26:49.881406 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - producer_fanout_be535d34c37f49b6b2bcbc39012e1ee5 (vhost: /, messages: 0) 2026-04-09 06:26:49.881418 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - producer_fanout_ef1d5347034040e2929196cb401e4bd3 (vhost: /, messages: 0) 2026-04-09 06:26:49.881424 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-04-09 06:26:49.881565 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.881839 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.881851 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.881926 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin_fanout_3c3ac3329b7d42d384ca4d4745b7008b (vhost: /, messages: 0) 2026-04-09 06:26:49.881948 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin_fanout_5c083afb9dd8421dbab5b7c26018e1e9 (vhost: /, messages: 0) 2026-04-09 
06:26:49.882078 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin_fanout_68c5daf3db31445b84cd22fc133e1263 (vhost: /, messages: 0) 2026-04-09 06:26:49.882090 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin_fanout_b1dda493a32a4adcbe972cbd99d3f752 (vhost: /, messages: 0) 2026-04-09 06:26:49.882311 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin_fanout_db249d45f4114daea5d09b63764db3b0 (vhost: /, messages: 0) 2026-04-09 06:26:49.882330 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin_fanout_e78b1e0526e04eaface51984f4e10f0b (vhost: /, messages: 0) 2026-04-09 06:26:49.882337 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin_fanout_fb9c7f191d0940f481d6c503b5353e30 (vhost: /, messages: 0) 2026-04-09 06:26:49.882523 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-plugin_fanout_fe38c00ef29b415690aff7929a100199 (vhost: /, messages: 0) 2026-04-09 06:26:49.882534 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-04-09 06:26:49.882706 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.882716 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.882723 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.883181 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_0e71ac445dab44db874a371ec5ae433c (vhost: /, messages: 0) 2026-04-09 06:26:49.883248 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_0f14a9f14ff34aada3ce9fed02b1fc2b (vhost: /, messages: 0) 2026-04-09 06:26:49.883260 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_134b566c664d4953907ec11dfb8a4558 (vhost: /, messages: 0) 2026-04-09 06:26:49.883268 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - 
q-reports-plugin_fanout_3095b669fc1948f2b928c0acebda6ac0 (vhost: /, messages: 0) 2026-04-09 06:26:49.883274 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_30dfa2a601054752889f564324118cb9 (vhost: /, messages: 0) 2026-04-09 06:26:49.883373 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_3c1beb4fd1674d228a84e2a080cef672 (vhost: /, messages: 0) 2026-04-09 06:26:49.883384 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_45e6a81660534926881c6e9ab0d8ea87 (vhost: /, messages: 0) 2026-04-09 06:26:49.883391 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_4955aadc1ff6437b9920e2a11a26d233 (vhost: /, messages: 0) 2026-04-09 06:26:49.883452 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_4a7156ec325647cda9253d39be30b961 (vhost: /, messages: 0) 2026-04-09 06:26:49.883845 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_5599521d6973429699628a830c4e68fd (vhost: /, messages: 0) 2026-04-09 06:26:49.883928 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_6575251d04924f92a26ac8a3eca42bf0 (vhost: /, messages: 0) 2026-04-09 06:26:49.883937 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_72bd1e3e371840d9abb20d1e70961b2a (vhost: /, messages: 0) 2026-04-09 06:26:49.883944 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_7b292f341a2842b7bd5d32c0d11946ff (vhost: /, messages: 0) 2026-04-09 06:26:49.883955 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_7c6c597163e041c3a0d779ec1bcfbfbf (vhost: /, messages: 0) 2026-04-09 06:26:49.883977 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_9a70990d3be54741b082e4e481477677 (vhost: /, messages: 0) 2026-04-09 06:26:49.884047 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_9e944c5fe79449bea699998c045a4fa4 (vhost: /, messages: 0) 2026-04-09 
06:26:49.884133 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_ca3736254af844a7975ae17e1db1dcc8 (vhost: /, messages: 0) 2026-04-09 06:26:49.884265 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-reports-plugin_fanout_e05c7d5eb11c423b9e0679e218898a1f (vhost: /, messages: 0) 2026-04-09 06:26:49.884370 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-04-09 06:26:49.884561 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.884579 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.884643 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.884818 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions_fanout_1cc7a071a8bc4e638216e241191cd130 (vhost: /, messages: 0) 2026-04-09 06:26:49.884898 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions_fanout_610159b6119f4a98bd83bcddebb0fd63 (vhost: /, messages: 0) 2026-04-09 06:26:49.884909 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions_fanout_6d65c20e45304703b80eaab889933345 (vhost: /, messages: 0) 2026-04-09 06:26:49.885373 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions_fanout_795c048358484403b8af8ef09dca16ff (vhost: /, messages: 0) 2026-04-09 06:26:49.885451 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions_fanout_9112422cdb3d48138cf1e3a365f044cc (vhost: /, messages: 0) 2026-04-09 06:26:49.885466 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions_fanout_936ea80c66334342b85fdd98c59d738e (vhost: /, messages: 0) 2026-04-09 06:26:49.885472 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - 
q-server-resource-versions_fanout_996a1fa4a84a43589b48cfa9cbd6443e (vhost: /, messages: 0) 2026-04-09 06:26:49.885478 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions_fanout_e50349d7856440779d8ab344be36ff5b (vhost: /, messages: 0) 2026-04-09 06:26:49.885531 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - q-server-resource-versions_fanout_f62a7a69c3c64bd8897f3c3beab58541 (vhost: /, messages: 0) 2026-04-09 06:26:49.885541 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_00d8c011bbdf4bbe9e474c97726d8d52 (vhost: /, messages: 0) 2026-04-09 06:26:49.885548 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_0733286356e0483d8de9c3a404284dfb (vhost: /, messages: 0) 2026-04-09 06:26:49.886650 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_175b9f271ceb436ea3b2e921de89e81a (vhost: /, messages: 0) 2026-04-09 06:26:49.886685 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_200f23232ea444e0b16b1a55777ed79d (vhost: /, messages: 0) 2026-04-09 06:26:49.886736 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_310560fb822b4d29927e61d924cc1507 (vhost: /, messages: 0) 2026-04-09 06:26:49.886744 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_3212cdd89a9140258f2a8807e54f6a3c (vhost: /, messages: 0) 2026-04-09 06:26:49.886751 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_37630e9dcb4544e8bcae230d7b066792 (vhost: /, messages: 0) 2026-04-09 06:26:49.886769 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_405e5ed6c0b54ba099d55816ed55667a (vhost: /, messages: 0) 2026-04-09 06:26:49.886775 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_4e77b917b2fd41ad8d9e40840045e018 (vhost: /, messages: 0) 2026-04-09 06:26:49.886782 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_537b38c8bd6446a78778bd9e5342eec9 (vhost: /, messages: 0) 2026-04-09 06:26:49.886794 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_5be801f8a5a141e5a80c44920fd6d9fb (vhost: /, messages: 0) 2026-04-09 06:26:49.886930 | 
orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_6ea634d9564c4fa4a824a7c135131458 (vhost: /, messages: 0) 2026-04-09 06:26:49.886941 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_8578701960c14df283b116bc5001688d (vhost: /, messages: 0) 2026-04-09 06:26:49.886952 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_a05f08912d3746aab3a95cc5d0d37c46 (vhost: /, messages: 0) 2026-04-09 06:26:49.886959 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_aa7f0cfbc29a4ed6b6ece44e4a4f8089 (vhost: /, messages: 0) 2026-04-09 06:26:49.887446 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_ca18b534872b4b45805eb87df18237e2 (vhost: /, messages: 0) 2026-04-09 06:26:49.887474 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_ce976773bf8a43589f7d025bc22bb31b (vhost: /, messages: 0) 2026-04-09 06:26:49.887526 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_de0203f45a3844b7a98c2b5966ce90d4 (vhost: /, messages: 0) 2026-04-09 06:26:49.887538 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - reply_e172a8be12184f5082687291ea50369e (vhost: /, messages: 0) 2026-04-09 06:26:49.887719 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - scheduler (vhost: /, messages: 0) 2026-04-09 06:26:49.887775 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.887786 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.887905 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.887917 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - scheduler_fanout_687bb20418c34bb4b446a1d380eb0013 (vhost: /, messages: 0) 2026-04-09 06:26:49.888138 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - scheduler_fanout_6c511a2948d547ecbbc3677be89c9e10 (vhost: /, messages: 0) 2026-04-09 06:26:49.888204 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - 
scheduler_fanout_bb493e9fee354afbad3dcb1cd483d923 (vhost: /, messages: 0) 2026-04-09 06:26:49.888290 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - scheduler_fanout_c2b5061226fa4dc89aece325e2f16a43 (vhost: /, messages: 0) 2026-04-09 06:26:49.888364 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - scheduler_fanout_e0259c96692b4988923386c74656d334 (vhost: /, messages: 0) 2026-04-09 06:26:49.888662 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - scheduler_fanout_f7bca3eeed1641c2b280f6d22594591f (vhost: /, messages: 0) 2026-04-09 06:26:49.889713 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker (vhost: /, messages: 0) 2026-04-09 06:26:49.889745 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0) 2026-04-09 06:26:49.889753 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0) 2026-04-09 06:26:49.889759 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0) 2026-04-09 06:26:49.889766 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker_fanout_3ec88b44f21247e0aebe5bd8e331db9b (vhost: /, messages: 0) 2026-04-09 06:26:49.889785 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker_fanout_6167decbe5bf42108b370fb0a6e950c1 (vhost: /, messages: 0) 2026-04-09 06:26:49.889792 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker_fanout_6a3f667e83e040aa8d47e26069bf9aa6 (vhost: /, messages: 0) 2026-04-09 06:26:49.889829 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker_fanout_bf77586457d04dd782e789ffdb87245f (vhost: /, messages: 0) 2026-04-09 06:26:49.889837 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker_fanout_e41f68f4a07c471a84af1b4767878bcf (vhost: /, messages: 0) 2026-04-09 06:26:49.889843 | orchestrator | 2026-04-09 06:26:49 | INFO  |  - worker_fanout_fc14f61082574433b0efceb1c87ae4ae (vhost: /, messages: 0) 2026-04-09 06:26:50.167344 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges 2026-04-09 
06:26:56.428354 | orchestrator | 2026-04-09 06:26:56 | ERROR  | Unable to get ansible vault password
2026-04-09 06:26:56.428463 | orchestrator | 2026-04-09 06:26:56 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-09 06:26:56.428481 | orchestrator | 2026-04-09 06:26:56 | ERROR  | Dropping encrypted entries
2026-04-09 06:26:56.462128 | orchestrator | 2026-04-09 06:26:56 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-09 06:26:56.491922 | orchestrator | 2026-04-09 06:26:56 | INFO  | Found 46 exchange(s) in vhost '/':
2026-04-09 06:26:56.492005 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - aodh (type: topic, transient)
2026-04-09 06:26:56.492043 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - barbican.workers_fanout (type: fanout, transient)
2026-04-09 06:26:56.492059 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - ceilometer (type: topic, transient)
2026-04-09 06:26:56.492071 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - central_fanout (type: fanout, transient)
2026-04-09 06:26:56.492164 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - cinder (type: topic, transient)
2026-04-09 06:26:56.492179 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - cinder-backup_fanout (type: fanout, transient)
2026-04-09 06:26:56.492191 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - cinder-scheduler_fanout (type: fanout, transient)
2026-04-09 06:26:56.492203 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout (type: fanout, transient)
2026-04-09 06:26:56.492216 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout (type: fanout, transient)
2026-04-09 06:26:56.492228 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout (type: fanout, transient)
2026-04-09 06:26:56.492252 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - cinder-volume_fanout (type: fanout, transient)
2026-04-09 06:26:56.492263 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - compute_fanout (type: fanout, transient)
2026-04-09 06:26:56.492275 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - conductor_fanout (type: fanout, transient)
2026-04-09 06:26:56.492353 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - designate (type: topic, transient)
2026-04-09 06:26:56.492389 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - dns (type: topic, transient)
2026-04-09 06:26:56.492402 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - glance (type: topic, transient)
2026-04-09 06:26:56.492419 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - heat (type: topic, transient)
2026-04-09 06:26:56.492431 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - ironic (type: topic, transient)
2026-04-09 06:26:56.492469 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - keystone (type: topic, transient)
2026-04-09 06:26:56.492977 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - l3_agent_fanout (type: fanout, transient)
2026-04-09 06:26:56.493059 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - magnum (type: topic, transient)
2026-04-09 06:26:56.493072 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - magnum-conductor_fanout (type: fanout, transient)
2026-04-09 06:26:56.493162 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - manila-data_fanout (type: fanout, transient)
2026-04-09 06:26:56.493172 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - manila-scheduler_fanout (type: fanout, transient)
2026-04-09 06:26:56.493192 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - manila-share_fanout (type: fanout, transient)
2026-04-09 06:26:56.493201 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - neutron (type: topic, transient)
2026-04-09 06:26:56.493387 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - neutron-vo-Network-1.1_fanout (type: fanout, transient)
2026-04-09 06:26:56.493590 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - neutron-vo-Port-1.10_fanout (type: fanout, transient)
2026-04-09 06:26:56.493608 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - neutron-vo-SecurityGroup-1.6_fanout (type: fanout, transient)
2026-04-09 06:26:56.493618 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - neutron-vo-SecurityGroupRule-1.3_fanout (type: fanout, transient)
2026-04-09 06:26:56.493627 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - neutron-vo-Subnet-1.2_fanout (type: fanout, transient)
2026-04-09 06:26:56.493880 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - nova (type: topic, transient)
2026-04-09 06:26:56.493897 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - octavia (type: topic, transient)
2026-04-09 06:26:56.493907 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - octavia_provisioning_v2_fanout (type: fanout, transient)
2026-04-09 06:26:56.494242 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - openstack (type: topic, transient)
2026-04-09 06:26:56.494261 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - producer_fanout (type: fanout, transient)
2026-04-09 06:26:56.494274 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - q-agent-notifier-port-update_fanout (type: fanout, transient)
2026-04-09 06:26:56.494284 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - q-agent-notifier-security_group-update_fanout (type: fanout, transient)
2026-04-09 06:26:56.494472 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - q-plugin_fanout (type: fanout, transient)
2026-04-09 06:26:56.495014 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - q-reports-plugin_fanout (type: fanout, transient)
2026-04-09 06:26:56.495093 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - q-server-resource-versions_fanout (type: fanout, transient)
2026-04-09 06:26:56.495110 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - scheduler_fanout (type: fanout, transient)
2026-04-09 06:26:56.495121 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - swift (type: topic, transient)
2026-04-09 06:26:56.495130 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - trove (type: topic, transient)
2026-04-09 06:26:56.495139 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - worker_fanout (type: fanout, transient)
2026-04-09 06:26:56.495148 | orchestrator | 2026-04-09 06:26:56 | INFO  |  - zaqar (type: topic, transient)
2026-04-09 06:26:56.756537 | orchestrator | + osism apply -a upgrade keystone
2026-04-09 06:26:58.098617 | orchestrator | 2026-04-09 06:26:58 | INFO  | Prepare task for execution of keystone.
2026-04-09 06:26:58.164258 | orchestrator | 2026-04-09 06:26:58 | INFO  | Task a4529562-85a2-4a13-b721-a29847518039 (keystone) was prepared for execution.
2026-04-09 06:26:58.164379 | orchestrator | 2026-04-09 06:26:58 | INFO  | It takes a moment until task a4529562-85a2-4a13-b721-a29847518039 (keystone) has been started and output is visible here.
2026-04-09 06:27:07.725829 | orchestrator |
2026-04-09 06:27:07.725926 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 06:27:07.725937 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-09 06:27:07.725947 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-09 06:27:07.725977 | orchestrator |
2026-04-09 06:27:07.725985 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 06:27:07.725992 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-09 06:27:07.725999 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-09 06:27:07.726052 | orchestrator | Thursday 09 April 2026 06:27:02 +0000 (0:00:01.196) 0:00:01.196 ********
2026-04-09 06:27:07.726060 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:27:07.726068 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:27:07.726075 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:27:07.726083 | orchestrator |
2026-04-09 06:27:07.726090 |
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 06:27:07.726097 | orchestrator | Thursday 09 April 2026 06:27:03 +0000 (0:00:00.853) 0:00:02.049 ******** 2026-04-09 06:27:07.726105 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-09 06:27:07.726113 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-09 06:27:07.726120 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-09 06:27:07.726127 | orchestrator | 2026-04-09 06:27:07.726134 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-09 06:27:07.726141 | orchestrator | 2026-04-09 06:27:07.726148 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 06:27:07.726155 | orchestrator | Thursday 09 April 2026 06:27:04 +0000 (0:00:00.716) 0:00:02.766 ******** 2026-04-09 06:27:07.726163 | orchestrator | included: /ansible/roles/keystone/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:27:07.726171 | orchestrator | 2026-04-09 06:27:07.726178 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-09 06:27:07.726185 | orchestrator | Thursday 09 April 2026 06:27:05 +0000 (0:00:01.164) 0:00:03.930 ******** 2026-04-09 06:27:07.726197 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:07.726208 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:07.726251 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:07.726261 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:07.726270 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:07.726277 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:07.726293 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:07.726301 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 06:27:13.683203 | orchestrator |
2026-04-09 06:27:13.683312 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-09 06:27:13.683330 | orchestrator | Thursday 09 April 2026 06:27:07 +0000 (0:00:02.233) 0:00:06.164 ********
2026-04-09 06:27:13.683343 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:27:13.683356 | orchestrator |
2026-04-09 06:27:13.683369 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-09 06:27:13.683380 | orchestrator | Thursday 09 April 2026 06:27:08 +0000 (0:00:00.190) 0:00:06.355 ********
2026-04-09 06:27:13.683392 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:27:13.683404 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:27:13.683415 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:27:13.683426 | orchestrator |
2026-04-09 06:27:13.683437 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-09 06:27:13.683448 | orchestrator | Thursday 09 April 2026 06:27:08 +0000 (0:00:00.368) 0:00:06.723 ********
2026-04-09 06:27:13.683460 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 06:27:13.683471 | orchestrator |
2026-04-09 06:27:13.683482 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 06:27:13.683493 | orchestrator | Thursday 09 April 2026 06:27:09 +0000 (0:00:01.193) 0:00:07.917 ******** 2026-04-09 06:27:13.683504 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:27:13.683516 | orchestrator | 2026-04-09 06:27:13.683527 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-09 06:27:13.683538 | orchestrator | Thursday 09 April 2026 06:27:10 +0000 (0:00:01.115) 0:00:09.032 ******** 2026-04-09 06:27:13.683552 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:13.683592 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:13.683650 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:13.683675 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:13.683698 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:13.683732 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:13.683747 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:13.683789 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:13.683809 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:13.683823 | orchestrator | 2026-04-09 06:27:13.683845 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-09 06:27:15.056975 | orchestrator | Thursday 09 April 2026 06:27:13 +0000 (0:00:02.992) 0:00:12.025 ******** 2026-04-09 06:27:15.057082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:27:15.057130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:15.057145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:27:15.057157 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:27:15.057172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}}}})  2026-04-09 06:27:15.057216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:15.057230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:27:15.057242 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:27:15.057254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:27:15.057274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:15.057286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:27:15.057297 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:27:15.057309 | 
orchestrator | 2026-04-09 06:27:15.057321 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-09 06:27:15.057333 | orchestrator | Thursday 09 April 2026 06:27:14 +0000 (0:00:01.060) 0:00:13.086 ******** 2026-04-09 06:27:15.057358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:27:16.892451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:16.892545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:27:16.892554 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:27:16.892562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:27:16.892568 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:16.892583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:27:16.892588 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:27:16.892606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:27:16.892615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:16.892620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:27:16.892626 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:27:16.892630 | orchestrator | 2026-04-09 06:27:16.892636 | orchestrator | TASK 
[keystone : Copying over config.json files for services] ****************** 2026-04-09 06:27:16.892642 | orchestrator | Thursday 09 April 2026 06:27:15 +0000 (0:00:00.830) 0:00:13.916 ******** 2026-04-09 06:27:16.892647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:16.892660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:21.812503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:21.812619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:21.812637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:21.812650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:21.812679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:21.812730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:21.812797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:21.812812 | orchestrator | 2026-04-09 06:27:21.812825 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-09 06:27:21.812837 | orchestrator | Thursday 09 April 2026 06:27:18 +0000 (0:00:03.308) 0:00:17.225 ******** 2026-04-09 06:27:21.812850 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:21.812863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:21.812881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:21.812911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:27.435112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:27.435211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:27.435226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2026-04-09 06:27:27.435251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:27.435280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:27.435290 | orchestrator | 2026-04-09 06:27:27.435301 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-09 06:27:27.435312 | orchestrator | Thursday 09 April 2026 06:27:24 +0000 (0:00:05.336) 0:00:22.562 ******** 2026-04-09 06:27:27.435321 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:27:27.435330 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:27:27.435339 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:27:27.435348 | orchestrator | 2026-04-09 06:27:27.435357 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
************* 2026-04-09 06:27:27.435366 | orchestrator | Thursday 09 April 2026 06:27:25 +0000 (0:00:01.373) 0:00:23.936 ******** 2026-04-09 06:27:27.435375 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:27:27.435399 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:27:27.435409 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:27:27.435417 | orchestrator | 2026-04-09 06:27:27.435427 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-09 06:27:27.435435 | orchestrator | Thursday 09 April 2026 06:27:26 +0000 (0:00:00.544) 0:00:24.480 ******** 2026-04-09 06:27:27.435444 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:27:27.435453 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:27:27.435462 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:27:27.435471 | orchestrator | 2026-04-09 06:27:27.435480 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-09 06:27:27.435489 | orchestrator | Thursday 09 April 2026 06:27:26 +0000 (0:00:00.360) 0:00:24.841 ******** 2026-04-09 06:27:27.435498 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:27:27.435507 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:27:27.435516 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:27:27.435524 | orchestrator | 2026-04-09 06:27:27.435533 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-09 06:27:27.435542 | orchestrator | Thursday 09 April 2026 06:27:27 +0000 (0:00:00.515) 0:00:25.356 ******** 2026-04-09 06:27:27.435552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:27:27.435562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:27.435583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:27:27.435593 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:27:27.435610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:27:44.114926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:44.115048 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:27:44.115069 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:27:44.115087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:27:44.115219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:27:44.115243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:27:44.115261 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:27:44.115298 | orchestrator | 2026-04-09 06:27:44.115330 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 06:27:44.115348 | orchestrator | Thursday 09 April 2026 06:27:27 +0000 (0:00:00.616) 0:00:25.973 ******** 2026-04-09 06:27:44.115364 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:27:44.115380 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:27:44.115396 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:27:44.115412 | orchestrator | 2026-04-09 06:27:44.115430 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-09 06:27:44.115472 | orchestrator | Thursday 09 April 2026 06:27:27 
+0000 (0:00:00.308) 0:00:26.281 ******** 2026-04-09 06:27:44.115488 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-09 06:27:44.115501 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-09 06:27:44.115514 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-09 06:27:44.115525 | orchestrator | 2026-04-09 06:27:44.115534 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-09 06:27:44.115544 | orchestrator | Thursday 09 April 2026 06:27:29 +0000 (0:00:01.901) 0:00:28.183 ******** 2026-04-09 06:27:44.115554 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 06:27:44.115563 | orchestrator | 2026-04-09 06:27:44.115573 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-09 06:27:44.115583 | orchestrator | Thursday 09 April 2026 06:27:30 +0000 (0:00:00.988) 0:00:29.172 ******** 2026-04-09 06:27:44.115592 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:27:44.115602 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:27:44.115612 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:27:44.115633 | orchestrator | 2026-04-09 06:27:44.115643 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-09 06:27:44.115653 | orchestrator | Thursday 09 April 2026 06:27:31 +0000 (0:00:00.560) 0:00:29.732 ******** 2026-04-09 06:27:44.115662 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 06:27:44.115672 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 06:27:44.115681 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 06:27:44.115691 | orchestrator | 2026-04-09 06:27:44.115725 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 
2026-04-09 06:27:44.115735 | orchestrator | Thursday 09 April 2026 06:27:32 +0000 (0:00:01.230) 0:00:30.963 ******** 2026-04-09 06:27:44.115745 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:27:44.115755 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:27:44.115765 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:27:44.115774 | orchestrator | 2026-04-09 06:27:44.115784 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-09 06:27:44.115794 | orchestrator | Thursday 09 April 2026 06:27:32 +0000 (0:00:00.322) 0:00:31.286 ******** 2026-04-09 06:27:44.115804 | orchestrator | ok: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-09 06:27:44.115813 | orchestrator | ok: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-09 06:27:44.115823 | orchestrator | ok: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-09 06:27:44.115833 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-09 06:27:44.115844 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-09 06:27:44.115853 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-09 06:27:44.115863 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-09 06:27:44.115873 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-09 06:27:44.115883 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-09 06:27:44.115892 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-09 06:27:44.115902 | orchestrator | ok: [testbed-node-0] => (item={'src': 
'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-09 06:27:44.115919 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-09 06:27:44.115929 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-09 06:27:44.115939 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-09 06:27:44.115949 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-09 06:27:44.115959 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 06:27:44.115969 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 06:27:44.115979 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 06:27:44.115989 | orchestrator | ok: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 06:27:44.115999 | orchestrator | ok: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 06:27:44.116008 | orchestrator | ok: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 06:27:44.116018 | orchestrator | 2026-04-09 06:27:44.116028 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-09 06:27:44.116038 | orchestrator | Thursday 09 April 2026 06:27:41 +0000 (0:00:08.728) 0:00:40.014 ******** 2026-04-09 06:27:44.116047 | orchestrator | ok: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 06:27:44.116063 | orchestrator | ok: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 06:27:44.116073 | orchestrator | ok: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 06:27:44.116083 | 
orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 06:27:44.116099 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 06:27:48.714786 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 06:27:48.714914 | orchestrator | 2026-04-09 06:27:48.714939 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-09 06:27:48.714960 | orchestrator | Thursday 09 April 2026 06:27:44 +0000 (0:00:02.964) 0:00:42.978 ******** 2026-04-09 06:27:48.714989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:48.715014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 06:27:48.715062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}}}}) 2026-04-09 06:27:48.715142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:48.715166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 06:27:48.715185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 
06:27:48.715206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:48.715226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 06:27:48.715254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2026-04-09 06:27:48.715287 | orchestrator | 2026-04-09 06:27:48.715306 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-09 06:27:48.715324 | orchestrator | Thursday 09 April 2026 06:27:47 +0000 (0:00:03.228) 0:00:46.206 ******** 2026-04-09 06:27:48.715345 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 06:27:48.715365 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:27:48.715385 | orchestrator | } 2026-04-09 06:27:48.715404 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 06:27:48.715422 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:27:48.715441 | orchestrator | } 2026-04-09 06:27:48.715460 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 06:27:48.715477 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:27:48.715494 | orchestrator | } 2026-04-09 06:27:48.715512 | orchestrator | 2026-04-09 06:27:48.715531 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 06:27:48.715549 | orchestrator | Thursday 09 April 2026 06:27:48 +0000 (0:00:00.552) 0:00:46.759 ******** 2026-04-09 06:27:48.715588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:29:52.800166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:29:52.800309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:29:52.800331 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:29:52.800365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:29:52.800403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:29:52.800416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:29:52.800427 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:29:52.800458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 06:29:52.800471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 06:29:52.800552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 06:29:52.800580 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:29:52.800592 | orchestrator | 2026-04-09 06:29:52.800605 | orchestrator | TASK [keystone : Enable log_bin_trust_function_creators function] ************** 2026-04-09 06:29:52.800618 | orchestrator | Thursday 09 April 2026 06:27:49 +0000 (0:00:01.177) 0:00:47.937 ******** 2026-04-09 06:29:52.800629 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:29:52.800640 | orchestrator | 2026-04-09 06:29:52.800651 | orchestrator | TASK [keystone : Init keystone database upgrade] ******************************* 2026-04-09 06:29:52.800662 | orchestrator | Thursday 09 April 2026 06:27:51 +0000 (0:00:02.248) 0:00:50.185 ******** 2026-04-09 06:29:52.800673 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:29:52.800684 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:29:52.800696 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:29:52.800709 | orchestrator | 2026-04-09 06:29:52.800722 | orchestrator | TASK [keystone : Finish keystone database upgrade] ***************************** 2026-04-09 06:29:52.800735 | orchestrator | Thursday 09 April 2026 06:27:52 +0000 (0:00:00.458) 0:00:50.644 ******** 2026-04-09 
06:29:52.800749 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:29:52.800762 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:29:52.800775 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:29:52.800788 | orchestrator |
2026-04-09 06:29:52.800801 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-09 06:29:52.800814 | orchestrator | Thursday 09 April 2026 06:27:53 +0000 (0:00:00.813) 0:00:51.457 ********
2026-04-09 06:29:52.800827 | orchestrator |
2026-04-09 06:29:52.800839 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-09 06:29:52.800898 | orchestrator | Thursday 09 April 2026 06:27:53 +0000 (0:00:00.074) 0:00:51.532 ********
2026-04-09 06:29:52.800913 | orchestrator |
2026-04-09 06:29:52.800927 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-09 06:29:52.800940 | orchestrator | Thursday 09 April 2026 06:27:53 +0000 (0:00:00.073) 0:00:51.606 ********
2026-04-09 06:29:52.800952 | orchestrator |
2026-04-09 06:29:52.800965 | orchestrator | RUNNING HANDLER [keystone : Init keystone database upgrade] ********************
2026-04-09 06:29:52.800978 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-09 06:29:52.800993 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-09 06:29:52.801019 | orchestrator | Thursday 09 April 2026 06:27:53 +0000 (0:00:00.075) 0:00:51.681 ********
2026-04-09 06:29:52.801033 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:29:52.801046 | orchestrator |
2026-04-09 06:29:52.801058 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-04-09 06:29:52.801069 | orchestrator | Thursday 09 April 2026 06:29:00 +0000 (0:01:07.225) 0:01:58.907 ********
2026-04-09 06:29:52.801080 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:29:52.801091 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:29:52.801102 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:29:52.801113 | orchestrator |
2026-04-09 06:29:52.801124 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-04-09 06:29:52.801144 | orchestrator | Thursday 09 April 2026 06:29:52 +0000 (0:00:52.231) 0:02:51.138 ********
2026-04-09 06:30:31.462552 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:30:31.462673 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:30:31.462689 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:30:31.462702 | orchestrator |
2026-04-09 06:30:31.462714 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-04-09 06:30:31.462755 | orchestrator | Thursday 09 April 2026 06:30:04 +0000 (0:00:11.946) 0:03:03.084 ********
2026-04-09 06:30:31.462767 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:30:31.462779 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:30:31.462790 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:30:31.462801 | orchestrator |
2026-04-09 06:30:31.462812 | orchestrator | RUNNING HANDLER [keystone : Finish keystone database upgrade] ******************
2026-04-09 06:30:31.462823 | orchestrator | Thursday 09 April 2026 06:30:17 +0000 (0:00:12.570) 0:03:15.655 ********
2026-04-09 06:30:31.462834 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:30:31.462845 | orchestrator |
2026-04-09 06:30:31.462856 | orchestrator | TASK [keystone : Disable log_bin_trust_function_creators function] *************
2026-04-09 06:30:31.462867 | orchestrator | Thursday 09 April 2026 06:30:28 +0000 (0:00:11.003) 0:03:26.659 ********
2026-04-09 06:30:31.462878 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:30:31.462889 | orchestrator |
2026-04-09 06:30:31.462899 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 06:30:31.462912 | orchestrator | testbed-node-0 : ok=25  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 06:30:31.462925 | orchestrator | testbed-node-1 : ok=19  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 06:30:31.462936 | orchestrator | testbed-node-2 : ok=21  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-09 06:30:31.462946 | orchestrator |
2026-04-09 06:30:31.462958 | orchestrator |
2026-04-09 06:30:31.462968 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 06:30:31.462980 | orchestrator | Thursday 09 April 2026 06:30:31 +0000 (0:00:02.740) 0:03:29.400 ********
2026-04-09 06:30:31.462991 | orchestrator | ===============================================================================
2026-04-09 06:30:31.463001 | orchestrator | keystone : Init keystone database upgrade ------------------------------ 67.23s
2026-04-09 06:30:31.463028 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 52.23s
2026-04-09 06:30:31.463042 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.57s
2026-04-09 06:30:31.463055 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 11.95s
2026-04-09 06:30:31.463068 | orchestrator | keystone : Finish keystone database upgrade ---------------------------- 11.00s
2026-04-09 06:30:31.463081 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.73s
2026-04-09 06:30:31.463094 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.34s
2026-04-09 06:30:31.463107 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.31s
2026-04-09 06:30:31.463120 | orchestrator | service-check-containers : keystone | Check containers ------------------ 3.23s
2026-04-09 06:30:31.463133 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.99s
2026-04-09 06:30:31.463146 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.96s
2026-04-09 06:30:31.463159 | orchestrator | keystone : Disable log_bin_trust_function_creators function ------------- 2.74s
2026-04-09 06:30:31.463171 | orchestrator | keystone : Enable log_bin_trust_function_creators function -------------- 2.25s
2026-04-09 06:30:31.463184 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.23s
2026-04-09 06:30:31.463198 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.90s
2026-04-09 06:30:31.463210 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.37s
2026-04-09 06:30:31.463223 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.23s
2026-04-09 06:30:31.463235 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 1.19s
2026-04-09 06:30:31.463248 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.18s
2026-04-09 06:30:31.463269 | orchestrator | keystone : include_tasks ------------------------------------------------ 1.16s
2026-04-09 06:30:31.653173 | orchestrator | + osism apply -a upgrade placement
2026-04-09 06:30:32.994969 | orchestrator | 2026-04-09 06:30:32 | INFO  | Prepare task for execution of placement.
2026-04-09 06:30:33.062594 | orchestrator | 2026-04-09 06:30:33 | INFO  | Task 94fa9e4a-2261-437b-962b-ce944e10c9d2 (placement) was prepared for execution.
2026-04-09 06:30:33.062717 | orchestrator | 2026-04-09 06:30:33 | INFO  | It takes a moment until task 94fa9e4a-2261-437b-962b-ce944e10c9d2 (placement) has been started and output is visible here.
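The TASKS RECAP above comes from Ansible's `profile_tasks` timing callback. When comparing upgrade runs, it can be handy to pull the per-task durations out of such console logs; below is a minimal, illustrative sketch (the parser is not part of the job, and the sample lines are quoted from the recap above):

```python
import re

# Matches profile_tasks recap lines like
# "keystone : Init keystone database upgrade ------ 67.23s"
RECAP_RE = re.compile(r"^(?P<task>.+?) -{2,}\s*(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task name, seconds) pairs from TASKS RECAP lines."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out

sample = [
    "keystone : Init keystone database upgrade ------------------------------ 67.23s",
    "keystone : Restart keystone-ssh container ------------------------------ 52.23s",
]
print(parse_recap(sample))
```

Sorting the resulting pairs by duration reproduces the ordering profile_tasks already prints; the value of parsing is in diffing two runs side by side.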
2026-04-09 06:31:29.050711 | orchestrator |
2026-04-09 06:31:29.050832 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 06:31:29.050850 | orchestrator |
2026-04-09 06:31:29.050862 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 06:31:29.050874 | orchestrator | Thursday 09 April 2026 06:30:38 +0000 (0:00:01.728) 0:00:01.728 ********
2026-04-09 06:31:29.050885 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:31:29.050897 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:31:29.050909 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:31:29.050920 | orchestrator |
2026-04-09 06:31:29.050931 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 06:31:29.050942 | orchestrator | Thursday 09 April 2026 06:30:39 +0000 (0:00:01.717) 0:00:03.446 ********
2026-04-09 06:31:29.050954 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-09 06:31:29.050965 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-09 06:31:29.050976 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-09 06:31:29.050987 | orchestrator |
2026-04-09 06:31:29.050998 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-09 06:31:29.051009 | orchestrator |
2026-04-09 06:31:29.051019 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-09 06:31:29.051030 | orchestrator | Thursday 09 April 2026 06:30:41 +0000 (0:00:02.160) 0:00:05.607 ********
2026-04-09 06:31:29.051042 | orchestrator | included: /ansible/roles/placement/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 06:31:29.051054 | orchestrator |
2026-04-09 06:31:29.051065 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-09 06:31:29.051076 | orchestrator | Thursday 09 April 2026 06:30:44 +0000 (0:00:02.703) 0:00:08.310 ********
2026-04-09 06:31:29.051087 | orchestrator | ok: [testbed-node-0] => (item=placement (placement))
2026-04-09 06:31:29.051098 | orchestrator |
2026-04-09 06:31:29.051109 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] ***********
2026-04-09 06:31:29.051120 | orchestrator | Thursday 09 April 2026 06:30:50 +0000 (0:00:05.353) 0:00:13.664 ********
2026-04-09 06:31:29.051131 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-09 06:31:29.051142 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-09 06:31:29.051153 | orchestrator |
2026-04-09 06:31:29.051164 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-09 06:31:29.051175 | orchestrator | Thursday 09 April 2026 06:30:58 +0000 (0:00:08.745) 0:00:22.409 ********
2026-04-09 06:31:29.051186 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 06:31:29.051197 | orchestrator |
2026-04-09 06:31:29.051208 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-09 06:31:29.051219 | orchestrator | Thursday 09 April 2026 06:31:03 +0000 (0:00:04.690) 0:00:27.100 ********
2026-04-09 06:31:29.051230 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-04-09 06:31:29.051241 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 06:31:29.051252 | orchestrator |
2026-04-09 06:31:29.051278 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-04-09 06:31:29.051312 | orchestrator | Thursday 09 April 2026 06:31:09 +0000 (0:00:06.289) 0:00:33.390 ********
2026-04-09 06:31:29.051324 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 06:31:29.051335 | orchestrator | 2026-04-09 06:31:29.051346 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-04-09 06:31:29.051390 | orchestrator | Thursday 09 April 2026 06:31:14 +0000 (0:00:04.652) 0:00:38.043 ******** 2026-04-09 06:31:29.051403 | orchestrator | ok: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-09 06:31:29.051414 | orchestrator | 2026-04-09 06:31:29.051425 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-09 06:31:29.051436 | orchestrator | Thursday 09 April 2026 06:31:19 +0000 (0:00:05.094) 0:00:43.138 ******** 2026-04-09 06:31:29.051447 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:31:29.051458 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:31:29.051468 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:31:29.051480 | orchestrator | 2026-04-09 06:31:29.051491 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-09 06:31:29.051502 | orchestrator | Thursday 09 April 2026 06:31:21 +0000 (0:00:01.740) 0:00:44.878 ******** 2026-04-09 06:31:29.051539 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:29.051556 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:29.051569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 
'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:29.051590 | orchestrator | 2026-04-09 06:31:29.051608 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-09 06:31:29.051619 | orchestrator | Thursday 09 April 2026 06:31:23 +0000 (0:00:02.252) 0:00:47.131 ******** 2026-04-09 06:31:29.051631 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:31:29.051642 | orchestrator | 2026-04-09 06:31:29.051653 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-09 06:31:29.051664 | orchestrator | Thursday 09 April 2026 06:31:24 +0000 (0:00:01.194) 0:00:48.326 ******** 2026-04-09 06:31:29.051675 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:31:29.051686 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:31:29.051697 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:31:29.051708 | orchestrator | 2026-04-09 06:31:29.051719 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-09 06:31:29.051730 | orchestrator | Thursday 09 April 2026 06:31:25 +0000 (0:00:01.326) 0:00:49.652 ******** 2026-04-09 06:31:29.051741 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:31:29.051752 | orchestrator | 2026-04-09 06:31:29.051763 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-09 06:31:29.051774 | orchestrator | Thursday 09 April 2026 
06:31:27 +0000 (0:00:01.875) 0:00:51.528 ******** 2026-04-09 06:31:29.051793 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:32.580258 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:32.580411 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:32.580448 | orchestrator | 2026-04-09 06:31:32.580471 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-09 06:31:32.580481 | orchestrator | Thursday 09 April 2026 06:31:30 +0000 (0:00:02.549) 0:00:54.077 ******** 2026-04-09 06:31:32.580491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:31:32.580501 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:31:32.580527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:31:32.580536 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:31:32.580544 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:31:32.580559 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:31:32.580568 | orchestrator | 2026-04-09 06:31:32.580576 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-09 06:31:32.580584 | orchestrator | Thursday 09 April 2026 06:31:32 +0000 (0:00:01.734) 0:00:55.812 ******** 2026-04-09 06:31:32.580597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:31:32.580606 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:31:32.580615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:31:32.580623 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:31:32.580640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:31:47.542288 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:31:47.542510 | orchestrator | 2026-04-09 06:31:47.542541 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-09 06:31:47.542563 | orchestrator | Thursday 09 April 2026 06:31:33 +0000 (0:00:01.543) 0:00:57.355 ******** 2026-04-09 06:31:47.542587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:47.542633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:47.542658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:47.542679 | orchestrator | 2026-04-09 06:31:47.542692 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-09 06:31:47.542703 | orchestrator | Thursday 09 April 2026 06:31:36 +0000 (0:00:02.484) 0:00:59.840 ******** 2026-04-09 06:31:47.542737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:47.542780 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:47.542794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:47.542806 | orchestrator | 2026-04-09 06:31:47.542820 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-09 06:31:47.542833 | orchestrator | Thursday 09 April 2026 06:31:39 +0000 (0:00:03.689) 0:01:03.530 ******** 2026-04-09 06:31:47.542846 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-09 06:31:47.542860 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:31:47.542873 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-09 06:31:47.542886 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:31:47.542898 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-09 06:31:47.542926 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:31:47.542939 | orchestrator | 2026-04-09 06:31:47.542958 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-04-09 06:31:47.542970 | orchestrator | Thursday 09 April 2026 06:31:41 +0000 (0:00:01.564) 0:01:05.095 ******** 2026-04-09 06:31:47.542982 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:31:47.542994 | orchestrator | 2026-04-09 06:31:47.543005 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] ********** 2026-04-09 06:31:47.543020 | orchestrator | Thursday 09 April 2026 06:31:43 +0000 (0:00:01.876) 0:01:06.971 ******** 2026-04-09 06:31:47.543038 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:31:47.543053 | orchestrator | changed: [testbed-node-1] 2026-04-09 06:31:47.543068 | orchestrator | changed: [testbed-node-2] 2026-04-09 
06:31:47.543083 | orchestrator | 2026-04-09 06:31:47.543100 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-09 06:31:47.543118 | orchestrator | Thursday 09 April 2026 06:31:46 +0000 (0:00:02.922) 0:01:09.893 ******** 2026-04-09 06:31:47.543137 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:31:47.543155 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:31:47.543168 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:31:47.543178 | orchestrator | 2026-04-09 06:31:47.543196 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-09 06:31:54.920499 | orchestrator | Thursday 09 April 2026 06:31:48 +0000 (0:00:02.340) 0:01:12.234 ******** 2026-04-09 06:31:54.920611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:31:54.920630 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:31:54.920658 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:31:54.920670 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:31:54.920680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:31:54.920712 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:31:54.920723 | orchestrator | 2026-04-09 06:31:54.920733 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-04-09 06:31:54.920743 | orchestrator | Thursday 09 April 2026 06:31:50 +0000 (0:00:02.197) 0:01:14.432 ******** 2026-04-09 06:31:54.920777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:54.920803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:54.920822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:31:54.920850 | orchestrator | 2026-04-09 06:31:54.920868 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart 
containers] *** 2026-04-09 06:31:54.920883 | orchestrator | Thursday 09 April 2026 06:31:53 +0000 (0:00:02.538) 0:01:16.970 ******** 2026-04-09 06:31:54.920898 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 06:31:54.920915 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:31:54.920932 | orchestrator | } 2026-04-09 06:31:54.920949 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 06:31:54.920966 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:31:54.920985 | orchestrator | } 2026-04-09 06:31:54.921001 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 06:31:54.921015 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:31:54.921025 | orchestrator | } 2026-04-09 06:31:54.921035 | orchestrator | 2026-04-09 06:31:54.921045 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 06:31:54.921055 | orchestrator | Thursday 09 April 2026 06:31:54 +0000 (0:00:01.377) 0:01:18.348 ******** 2026-04-09 06:31:54.921075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:32:50.026386 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:32:50.026505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:32:50.026552 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:32:50.026574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:32:50.026625 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:32:50.026644 | orchestrator | 2026-04-09 06:32:50.026664 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-09 06:32:50.026684 | orchestrator | Thursday 09 April 2026 06:31:56 +0000 (0:00:02.125) 0:01:20.473 ******** 2026-04-09 06:32:50.026703 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:32:50.026723 | orchestrator | 2026-04-09 06:32:50.026742 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-09 06:32:50.026761 | orchestrator | Thursday 09 April 2026 06:32:00 +0000 (0:00:03.274) 0:01:23.748 ******** 2026-04-09 06:32:50.026779 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:32:50.026791 | orchestrator | 2026-04-09 06:32:50.026803 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-09 06:32:50.026814 | orchestrator | Thursday 09 April 2026 06:32:03 +0000 (0:00:03.615) 0:01:27.363 ******** 2026-04-09 06:32:50.026825 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:32:50.026836 | orchestrator | 2026-04-09 06:32:50.026847 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-09 06:32:50.026858 | orchestrator | Thursday 09 April 2026 06:32:20 +0000 (0:00:16.305) 0:01:43.669 ******** 2026-04-09 06:32:50.026869 | orchestrator | 2026-04-09 06:32:50.026882 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2026-04-09 06:32:50.026895 | orchestrator | Thursday 09 April 2026 06:32:20 +0000 (0:00:00.435) 0:01:44.104 ******** 2026-04-09 06:32:50.026908 | orchestrator | 2026-04-09 06:32:50.026922 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-09 06:32:50.026936 | orchestrator | Thursday 09 April 2026 06:32:20 +0000 (0:00:00.494) 0:01:44.599 ******** 2026-04-09 06:32:50.026949 | orchestrator | 2026-04-09 06:32:50.026962 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-09 06:32:50.026975 | orchestrator | Thursday 09 April 2026 06:32:21 +0000 (0:00:00.787) 0:01:45.386 ******** 2026-04-09 06:32:50.026988 | orchestrator | changed: [testbed-node-2] 2026-04-09 06:32:50.027001 | orchestrator | changed: [testbed-node-1] 2026-04-09 06:32:50.027014 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:32:50.027026 | orchestrator | 2026-04-09 06:32:50.027040 | orchestrator | TASK [placement : Perform Placement online data migration] ********************* 2026-04-09 06:32:50.027053 | orchestrator | Thursday 09 April 2026 06:32:36 +0000 (0:00:14.976) 0:02:00.362 ******** 2026-04-09 06:32:50.027065 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:32:50.027077 | orchestrator | 2026-04-09 06:32:50.027090 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 06:32:50.027104 | orchestrator | testbed-node-0 : ok=24  changed=9  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-09 06:32:50.027138 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 06:32:50.027152 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 06:32:50.027165 | orchestrator | 2026-04-09 06:32:50.027179 | orchestrator | 2026-04-09 06:32:50.027192 | orchestrator 
| TASKS RECAP ******************************************************************** 2026-04-09 06:32:50.027205 | orchestrator | Thursday 09 April 2026 06:32:49 +0000 (0:00:13.004) 0:02:13.367 ******** 2026-04-09 06:32:50.027230 | orchestrator | =============================================================================== 2026-04-09 06:32:50.027241 | orchestrator | placement : Running placement bootstrap container ---------------------- 16.30s 2026-04-09 06:32:50.027282 | orchestrator | placement : Restart placement-api container ---------------------------- 14.98s 2026-04-09 06:32:50.027293 | orchestrator | placement : Perform Placement online data migration -------------------- 13.00s 2026-04-09 06:32:50.027304 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 8.75s 2026-04-09 06:32:50.027316 | orchestrator | service-ks-register : placement | Creating users ------------------------ 6.29s 2026-04-09 06:32:50.027327 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 5.35s 2026-04-09 06:32:50.027338 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 5.09s 2026-04-09 06:32:50.027357 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.69s 2026-04-09 06:32:50.027368 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.65s 2026-04-09 06:32:50.027379 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.69s 2026-04-09 06:32:50.027391 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.62s 2026-04-09 06:32:50.027402 | orchestrator | placement : Creating placement databases -------------------------------- 3.27s 2026-04-09 06:32:50.027413 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.92s 2026-04-09 06:32:50.027424 | orchestrator | placement : 
include_tasks ----------------------------------------------- 2.70s 2026-04-09 06:32:50.027435 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.55s 2026-04-09 06:32:50.027446 | orchestrator | service-check-containers : placement | Check containers ----------------- 2.54s 2026-04-09 06:32:50.027458 | orchestrator | placement : Copying over config.json files for services ----------------- 2.48s 2026-04-09 06:32:50.027469 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.34s 2026-04-09 06:32:50.027480 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.25s 2026-04-09 06:32:50.027491 | orchestrator | placement : Copying over existing policy file --------------------------- 2.20s 2026-04-09 06:32:50.234912 | orchestrator | + osism apply -a upgrade neutron 2026-04-09 06:32:51.632805 | orchestrator | 2026-04-09 06:32:51 | INFO  | Prepare task for execution of neutron. 2026-04-09 06:32:51.697051 | orchestrator | 2026-04-09 06:32:51 | INFO  | Task 6272273b-ac4a-412e-8f5e-ff5227ac4f4e (neutron) was prepared for execution. 2026-04-09 06:32:51.697149 | orchestrator | 2026-04-09 06:32:51 | INFO  | It takes a moment until task 6272273b-ac4a-412e-8f5e-ff5227ac4f4e (neutron) has been started and output is visible here. 
2026-04-09 06:33:31.815063 | orchestrator | 2026-04-09 06:33:31.815186 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 06:33:31.815254 | orchestrator | 2026-04-09 06:33:31.815267 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 06:33:31.815279 | orchestrator | Thursday 09 April 2026 06:32:56 +0000 (0:00:01.484) 0:00:01.485 ******** 2026-04-09 06:33:31.815290 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:33:31.815302 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:33:31.815313 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:33:31.815324 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:33:31.815335 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:33:31.815347 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:33:31.815358 | orchestrator | 2026-04-09 06:33:31.815369 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 06:33:31.815380 | orchestrator | Thursday 09 April 2026 06:32:59 +0000 (0:00:02.644) 0:00:04.129 ******** 2026-04-09 06:33:31.815392 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-09 06:33:31.815427 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-09 06:33:31.815463 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-09 06:33:31.815475 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-09 06:33:31.815486 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-09 06:33:31.815497 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-09 06:33:31.815508 | orchestrator | 2026-04-09 06:33:31.815519 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-09 06:33:31.815530 | orchestrator | 2026-04-09 06:33:31.815541 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-04-09 06:33:31.815552 | orchestrator | Thursday 09 April 2026 06:33:01 +0000 (0:00:02.606) 0:00:06.736 ******** 2026-04-09 06:33:31.815564 | orchestrator | included: /ansible/roles/neutron/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 06:33:31.815576 | orchestrator | 2026-04-09 06:33:31.815600 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-09 06:33:31.815614 | orchestrator | Thursday 09 April 2026 06:33:06 +0000 (0:00:04.795) 0:00:11.531 ******** 2026-04-09 06:33:31.815627 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:33:31.815640 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:33:31.815699 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:33:31.815748 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:33:31.815762 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:33:31.815775 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:33:31.815788 | orchestrator | 2026-04-09 06:33:31.815801 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-09 06:33:31.815814 | orchestrator | Thursday 09 April 2026 06:33:09 +0000 (0:00:03.341) 0:00:14.873 ******** 2026-04-09 06:33:31.815827 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:33:31.815840 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:33:31.815852 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:33:31.815865 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:33:31.815877 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:33:31.815890 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:33:31.815902 | orchestrator | 2026-04-09 06:33:31.815915 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-09 06:33:31.815928 | orchestrator | Thursday 09 April 2026 06:33:12 +0000 (0:00:02.466) 0:00:17.339 ******** 
2026-04-09 06:33:31.815942 | orchestrator | ok: [testbed-node-0] => { 2026-04-09 06:33:31.815955 | orchestrator |  "changed": false, 2026-04-09 06:33:31.815966 | orchestrator |  "msg": "All assertions passed" 2026-04-09 06:33:31.815977 | orchestrator | } 2026-04-09 06:33:31.815989 | orchestrator | ok: [testbed-node-1] => { 2026-04-09 06:33:31.816000 | orchestrator |  "changed": false, 2026-04-09 06:33:31.816011 | orchestrator |  "msg": "All assertions passed" 2026-04-09 06:33:31.816022 | orchestrator | } 2026-04-09 06:33:31.816033 | orchestrator | ok: [testbed-node-2] => { 2026-04-09 06:33:31.816044 | orchestrator |  "changed": false, 2026-04-09 06:33:31.816055 | orchestrator |  "msg": "All assertions passed" 2026-04-09 06:33:31.816066 | orchestrator | } 2026-04-09 06:33:31.816093 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 06:33:31.816105 | orchestrator |  "changed": false, 2026-04-09 06:33:31.816116 | orchestrator |  "msg": "All assertions passed" 2026-04-09 06:33:31.816127 | orchestrator | } 2026-04-09 06:33:31.816138 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 06:33:31.816149 | orchestrator |  "changed": false, 2026-04-09 06:33:31.816160 | orchestrator |  "msg": "All assertions passed" 2026-04-09 06:33:31.816171 | orchestrator | } 2026-04-09 06:33:31.816182 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 06:33:31.816213 | orchestrator |  "changed": false, 2026-04-09 06:33:31.816224 | orchestrator |  "msg": "All assertions passed" 2026-04-09 06:33:31.816237 | orchestrator | } 2026-04-09 06:33:31.816248 | orchestrator | 2026-04-09 06:33:31.816259 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-09 06:33:31.816271 | orchestrator | Thursday 09 April 2026 06:33:14 +0000 (0:00:02.028) 0:00:19.368 ******** 2026-04-09 06:33:31.816291 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:33:31.816303 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:33:31.816314 | orchestrator 
| skipping: [testbed-node-2] 2026-04-09 06:33:31.816325 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:33:31.816336 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:33:31.816347 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:33:31.816358 | orchestrator | 2026-04-09 06:33:31.816369 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 06:33:31.816380 | orchestrator | Thursday 09 April 2026 06:33:16 +0000 (0:00:02.350) 0:00:21.719 ******** 2026-04-09 06:33:31.816392 | orchestrator | included: /ansible/roles/neutron/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 06:33:31.816404 | orchestrator | 2026-04-09 06:33:31.816415 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-09 06:33:31.816426 | orchestrator | Thursday 09 April 2026 06:33:19 +0000 (0:00:02.541) 0:00:24.261 ******** 2026-04-09 06:33:31.816437 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:33:31.816448 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:33:31.816459 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:33:31.816470 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:33:31.816500 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:33:31.816511 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:33:31.816522 | orchestrator | 2026-04-09 06:33:31.816534 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-09 06:33:31.816545 | orchestrator | Thursday 09 April 2026 06:33:22 +0000 (0:00:03.507) 0:00:27.769 ******** 2026-04-09 06:33:31.816556 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:33:31.816567 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:33:31.816578 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:33:31.816589 | orchestrator | ok: [testbed-node-3] 2026-04-09 
06:33:31.816600 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:33:31.816611 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:33:31.816622 | orchestrator | 2026-04-09 06:33:31.816633 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-09 06:33:31.816645 | orchestrator | Thursday 09 April 2026 06:33:25 +0000 (0:00:02.900) 0:00:30.670 ******** 2026-04-09 06:33:31.816656 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:33:31.816667 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:33:31.816678 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:33:31.816689 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:33:31.816700 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:33:31.816711 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:33:31.816722 | orchestrator | 2026-04-09 06:33:31.816733 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-09 06:33:31.816744 | orchestrator | Thursday 09 April 2026 06:33:29 +0000 (0:00:03.626) 0:00:34.296 ******** 2026-04-09 06:33:31.816761 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:33:31.816789 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:33:31.816804 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:33:31.816825 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 06:33:43.420046 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 06:33:43.420161 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 06:33:43.420257 | orchestrator | 2026-04-09 06:33:43.420272 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-09 06:33:43.420284 | orchestrator | Thursday 09 April 2026 06:33:33 +0000 (0:00:03.726) 0:00:38.023 ******** 2026-04-09 06:33:43.420295 | orchestrator | [WARNING]: Skipped 2026-04-09 06:33:43.420306 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-09 06:33:43.420317 | orchestrator | due to this access issue: 2026-04-09 06:33:43.420328 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-09 06:33:43.420351 | orchestrator | a directory 2026-04-09 06:33:43.420361 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 06:33:43.420371 | orchestrator | 2026-04-09 06:33:43.420381 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 06:33:43.420391 | orchestrator | Thursday 09 April 2026 06:33:35 +0000 (0:00:02.261) 0:00:40.284 ******** 2026-04-09 06:33:43.420401 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 
06:33:43.420412 | orchestrator | 2026-04-09 06:33:43.420422 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-09 06:33:43.420432 | orchestrator | Thursday 09 April 2026 06:33:38 +0000 (0:00:02.684) 0:00:42.969 ******** 2026-04-09 06:33:43.420444 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:33:43.420476 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:33:43.420488 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:33:43.420511 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 06:33:43.420523 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 06:33:43.420533 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 06:33:43.420543 | orchestrator | 2026-04-09 06:33:43.420553 | orchestrator | TASK [service-cert-copy : neutron | Copying 
over backend internal TLS certificate] *** 2026-04-09 06:33:43.420563 | orchestrator | Thursday 09 April 2026 06:33:41 +0000 (0:00:03.764) 0:00:46.733 ******** 2026-04-09 06:33:43.420582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:33:47.363925 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:33:47.364029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:33:47.364064 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:33:47.364078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:33:47.364091 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:33:47.364103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:33:47.364116 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:33:47.364127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:33:47.364162 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:33:47.364219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:33:47.364232 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:33:47.364244 | orchestrator | 2026-04-09 06:33:47.364256 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-09 06:33:47.364268 | orchestrator | Thursday 09 April 2026 06:33:45 +0000 (0:00:03.641) 0:00:50.375 ******** 2026-04-09 06:33:47.364286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:33:47.364299 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:33:47.364311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:33:47.364322 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:33:47.364334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:33:47.364354 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:33:47.364374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:33:58.140328 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:33:58.140491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:33:58.140525 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:33:58.140546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:33:58.140567 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:33:58.140585 | orchestrator | 2026-04-09 06:33:58.140604 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-09 06:33:58.140623 | orchestrator | Thursday 09 April 2026 06:33:49 +0000 (0:00:03.735) 0:00:54.110 ******** 2026-04-09 06:33:58.140641 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:33:58.140659 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:33:58.140677 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:33:58.140694 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:33:58.140711 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:33:58.140758 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:33:58.140777 | orchestrator | 2026-04-09 06:33:58.140797 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-09 06:33:58.140817 | orchestrator | Thursday 09 April 2026 06:33:52 +0000 (0:00:03.455) 0:00:57.565 ******** 2026-04-09 06:33:58.140835 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:33:58.140853 | orchestrator | 2026-04-09 06:33:58.140870 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-09 06:33:58.140888 | orchestrator | Thursday 09 April 2026 06:33:53 +0000 (0:00:01.120) 0:00:58.686 ******** 2026-04-09 06:33:58.140905 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 06:33:58.140922 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:33:58.140939 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:33:58.140956 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:33:58.140973 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:33:58.140989 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:33:58.141006 | orchestrator | 2026-04-09 06:33:58.141023 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-09 06:33:58.141038 | orchestrator | Thursday 09 April 2026 06:33:55 +0000 (0:00:01.937) 0:01:00.623 ******** 2026-04-09 06:33:58.141056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:33:58.141097 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:33:58.141125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:33:58.141144 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:33:58.141230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:33:58.141267 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:33:58.141286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:33:58.141304 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:33:58.141319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:33:58.141336 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:33:58.141369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:34:09.209971 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:34:09.210087 | orchestrator | 2026-04-09 06:34:09.210096 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-09 06:34:09.210101 | orchestrator | Thursday 09 April 2026 06:33:59 +0000 (0:00:03.503) 0:01:04.127 ******** 2026-04-09 06:34:09.210119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:34:09.210136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:34:09.210141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 
06:34:09.210175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:09.210211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:09.210217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:09.210225 | orchestrator |
2026-04-09 06:34:09.210229 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2026-04-09 06:34:09.210233 | orchestrator | Thursday 09 April 2026 06:34:04 +0000 (0:00:05.130) 0:01:09.258 ********
2026-04-09 06:34:09.210237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:09.210242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:09.210253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:12.761378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:12.761513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:12.761534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:12.761550 | orchestrator |
2026-04-09 06:34:12.761564 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-04-09 06:34:12.761576 | orchestrator | Thursday 09 April 2026 06:34:10 +0000 (0:00:06.405) 0:01:15.663 ********
2026-04-09 06:34:12.761589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:12.761601 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:34:12.761647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:12.761669 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:12.761680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:12.761692 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:12.761704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:12.761716 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:34:12.761728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:12.761740 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:34:12.761764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:39.692947 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:39.693052 | orchestrator |
2026-04-09 06:34:39.693065 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-04-09 06:34:39.693076 | orchestrator | Thursday 09 April 2026 06:34:13 +0000 (0:00:03.207) 0:01:18.870 ********
2026-04-09 06:34:39.693086 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:39.693095 | orchestrator | ok: [testbed-node-1]
2026-04-09 06:34:39.693105 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:39.693162 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:39.693173 | orchestrator | ok: [testbed-node-2]
2026-04-09 06:34:39.693182 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:34:39.693191 | orchestrator |
2026-04-09 06:34:39.693201 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-04-09 06:34:39.693210 | orchestrator | Thursday 09 April 2026 06:34:17 +0000 (0:00:03.797) 0:01:22.668 ********
2026-04-09 06:34:39.693221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:39.693234 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:39.693243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:39.693252 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:39.693262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:39.693271 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:39.693315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:39.693345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:39.693357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:39.693367 | orchestrator |
2026-04-09 06:34:39.693376 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-04-09 06:34:39.693385 | orchestrator | Thursday 09 April 2026 06:34:22 +0000 (0:00:04.591) 0:01:27.259 ********
2026-04-09 06:34:39.693394 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:34:39.693403 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:34:39.693412 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:39.693420 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:34:39.693429 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:39.693438 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:39.693447 | orchestrator |
2026-04-09 06:34:39.693456 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-04-09 06:34:39.693465 | orchestrator | Thursday 09 April 2026 06:34:25 +0000 (0:00:03.487) 0:01:30.747 ********
2026-04-09 06:34:39.693474 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:34:39.693489 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:34:39.693500 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:34:39.693510 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:39.693520 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:39.693530 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:39.693540 | orchestrator |
2026-04-09 06:34:39.693551 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-04-09 06:34:39.693561 | orchestrator | Thursday 09 April 2026 06:34:29 +0000 (0:00:03.429) 0:01:34.176 ********
2026-04-09 06:34:39.693571 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:34:39.693581 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:34:39.693591 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:34:39.693602 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:39.693612 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:39.693623 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:39.693633 | orchestrator |
2026-04-09 06:34:39.693644 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-04-09 06:34:39.693654 | orchestrator | Thursday 09 April 2026 06:34:32 +0000 (0:00:03.530) 0:01:37.707 ********
2026-04-09 06:34:39.693664 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:34:39.693674 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:34:39.693684 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:34:39.693695 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:39.693705 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:39.693715 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:39.693725 | orchestrator |
2026-04-09 06:34:39.693735 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-04-09 06:34:39.693746 | orchestrator | Thursday 09 April 2026 06:34:36 +0000 (0:00:03.490) 0:01:41.197 ********
2026-04-09 06:34:39.693756 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:34:39.693766 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:34:39.693781 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:39.693791 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:34:39.693802 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:39.693812 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:39.693822 | orchestrator |
2026-04-09 06:34:39.693832 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-04-09 06:34:39.693848 | orchestrator | Thursday 09 April 2026 06:34:39 +0000 (0:00:03.414) 0:01:44.612 ********
2026-04-09 06:34:48.888500 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 06:34:48.888613 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 06:34:48.888628 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:34:48.888641 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:34:48.888652 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 06:34:48.888681 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:48.888704 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 06:34:48.888716 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:48.888727 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 06:34:48.888738 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:34:48.888749 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-09 06:34:48.888761 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:48.888773 | orchestrator |
2026-04-09 06:34:48.888784 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-04-09 06:34:48.888795 | orchestrator | Thursday 09 April 2026 06:34:43 +0000 (0:00:03.538) 0:01:48.150 ********
2026-04-09 06:34:48.888810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:48.888850 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:34:48.888866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:48.888879 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:34:48.888906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:48.888918 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:34:48.888950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:48.888963 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:34:48.888983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:48.888995 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:34:48.889006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:34:48.889018 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:34:48.889029 | orchestrator |
2026-04-09 06:34:48.889042 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-04-09 06:34:48.889055 | orchestrator | Thursday 09 April 2026 06:34:47 +0000 (0:00:03.828) 0:01:51.980 ********
2026-04-09 06:34:48.889069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:34:48.889082 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:34:48.889132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:35:25.846798 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.846918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:35:25.846966 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.846981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:35:25.846993 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.847005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:35:25.847018 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:25.847045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 06:35:25.847056 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:25.847100 | orchestrator |
2026-04-09 06:35:25.847114 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-04-09 06:35:25.847127 | orchestrator | Thursday 09 April 2026 06:34:50 +0000 (0:00:03.449)
0:01:55.429 ********
2026-04-09 06:35:25.847147 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.847158 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.847169 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.847199 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.847211 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:25.847221 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:25.847232 | orchestrator |
2026-04-09 06:35:25.847244 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-04-09 06:35:25.847255 | orchestrator | Thursday 09 April 2026 06:34:53 +0000 (0:00:03.386) 0:01:58.816 ********
2026-04-09 06:35:25.847266 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.847277 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.847288 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.847299 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:35:25.847311 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:35:25.847324 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:35:25.847338 | orchestrator |
2026-04-09 06:35:25.847351 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-04-09 06:35:25.847364 | orchestrator | Thursday 09 April 2026 06:34:59 +0000 (0:00:05.510) 0:02:04.326 ********
2026-04-09 06:35:25.847377 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.847391 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.847403 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.847416 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:25.847429 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.847441 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:25.847454 | orchestrator |
2026-04-09 06:35:25.847466 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-04-09 06:35:25.847479 | orchestrator | Thursday 09 April 2026 06:35:02 +0000 (0:00:03.263) 0:02:07.589 ********
2026-04-09 06:35:25.847493 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.847505 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.847519 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.847531 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:25.847545 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:25.847558 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.847570 | orchestrator |
2026-04-09 06:35:25.847583 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-04-09 06:35:25.847596 | orchestrator | Thursday 09 April 2026 06:35:06 +0000 (0:00:03.588) 0:02:11.178 ********
2026-04-09 06:35:25.847608 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.847621 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.847634 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.847647 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.847660 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:25.847673 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:25.847683 | orchestrator |
2026-04-09 06:35:25.847694 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-09 06:35:25.847705 | orchestrator | Thursday 09 April 2026 06:35:09 +0000 (0:00:03.416) 0:02:14.595 ********
2026-04-09 06:35:25.847716 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.847727 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.847738 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.847749 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.847759 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:25.847770 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:25.847781 | orchestrator |
2026-04-09 06:35:25.847792 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-09 06:35:25.847803 | orchestrator | Thursday 09 April 2026 06:35:13 +0000 (0:00:03.460) 0:02:18.056 ********
2026-04-09 06:35:25.847814 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.847825 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.847843 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.847854 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:25.847865 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:25.847876 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.847887 | orchestrator |
2026-04-09 06:35:25.847898 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-09 06:35:25.847909 | orchestrator | Thursday 09 April 2026 06:35:16 +0000 (0:00:03.494) 0:02:21.551 ********
2026-04-09 06:35:25.847920 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.847931 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.847941 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.847952 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.847963 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:25.847973 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:25.847984 | orchestrator |
2026-04-09 06:35:25.847995 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-09 06:35:25.848006 | orchestrator | Thursday 09 April 2026 06:35:20 +0000 (0:00:03.443) 0:02:24.994 ********
2026-04-09 06:35:25.848017 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.848028 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.848039 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.848050 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.848060 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:25.848095 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:25.848107 | orchestrator |
2026-04-09 06:35:25.848118 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-09 06:35:25.848135 | orchestrator | Thursday 09 April 2026 06:35:23 +0000 (0:00:03.638) 0:02:28.632 ********
2026-04-09 06:35:25.848146 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 06:35:25.848157 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:35:25.848168 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 06:35:25.848179 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:35:25.848190 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 06:35:25.848201 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:35:25.848212 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 06:35:25.848222 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:35:25.848233 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 06:35:25.848251 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:35:33.587122 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 06:35:33.587271 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:35:33.587298 | orchestrator |
2026-04-09 06:35:33.587319 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-04-09 06:35:33.587339 | orchestrator | Thursday 09 April 2026 06:35:27 +0000 (0:00:03.440) 0:02:32.073
******** 2026-04-09 06:35:33.587365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:35:33.587423 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:35:33.587447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:35:33.587470 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:35:33.587492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:35:33.587514 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:35:33.587613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:35:33.587643 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:35:33.587667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:35:33.587703 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:35:33.587725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-04-09 06:35:33.587746 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:35:33.587767 | orchestrator | 2026-04-09 06:35:33.587789 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-04-09 06:35:33.587809 | orchestrator | Thursday 09 April 2026 06:35:31 +0000 (0:00:03.955) 0:02:36.029 ******** 2026-04-09 06:35:33.587829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 06:35:33.587862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:35:33.587902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:35:38.939218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 06:35:38.939337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 06:35:38.939357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 
06:35:38.939372 | orchestrator |
2026-04-09 06:35:38.939402 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] ***
2026-04-09 06:35:38.939415 | orchestrator | Thursday 09 April 2026 06:35:34 +0000 (0:00:03.843) 0:02:39.873 ********
2026-04-09 06:35:38.939428 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 06:35:38.939440 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 06:35:38.939452 | orchestrator | }
2026-04-09 06:35:38.939464 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 06:35:38.939475 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 06:35:38.939486 | orchestrator | }
2026-04-09 06:35:38.939498 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 06:35:38.939509 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 06:35:38.939520 | orchestrator | }
2026-04-09 06:35:38.939531 | orchestrator | changed: [testbed-node-3] => {
2026-04-09 06:35:38.939543 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 06:35:38.939554 | orchestrator | }
2026-04-09 06:35:38.939565 | orchestrator | changed: [testbed-node-4] => {
2026-04-09 06:35:38.939577 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 06:35:38.939588 | orchestrator | }
2026-04-09 06:35:38.939600 | orchestrator | changed: [testbed-node-5] => {
2026-04-09 06:35:38.939631 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 06:35:38.939643 | orchestrator | }
2026-04-09 06:35:38.939654 | orchestrator |
2026-04-09 06:35:38.939666 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 06:35:38.939677 | orchestrator | Thursday 09 April 2026 06:35:36 +0000 (0:00:02.029) 0:02:41.903 ********
2026-04-09 06:35:38.939712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group':
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:35:38.939728 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:35:38.939743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:35:38.939758 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
06:35:38.939771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:35:38.939786 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:35:38.939805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:35:38.939826 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 06:35:38.939847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:38:51.154108 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:38:51.154213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 06:38:51.154228 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:38:51.154236 | orchestrator | 2026-04-09 06:38:51.154244 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 06:38:51.154252 | 
orchestrator | Thursday 09 April 2026 06:35:41 +0000 (0:00:04.056) 0:02:45.959 ********
2026-04-09 06:38:51.154259 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:38:51.154266 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:38:51.154273 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:38:51.154280 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:38:51.154286 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:38:51.154293 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:38:51.154299 | orchestrator |
2026-04-09 06:38:51.154306 | orchestrator | TASK [neutron : Running Neutron database expand container] *********************
2026-04-09 06:38:51.154313 | orchestrator | Thursday 09 April 2026 06:35:42 +0000 (0:00:01.872) 0:02:47.832 ********
2026-04-09 06:38:51.154320 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:38:51.154326 | orchestrator |
2026-04-09 06:38:51.154333 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 06:38:51.154341 | orchestrator | Thursday 09 April 2026 06:36:21 +0000 (0:00:38.598) 0:03:26.431 ********
2026-04-09 06:38:51.154348 | orchestrator |
2026-04-09 06:38:51.154354 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 06:38:51.154361 | orchestrator | Thursday 09 April 2026 06:36:21 +0000 (0:00:00.446) 0:03:26.877 ********
2026-04-09 06:38:51.154367 | orchestrator |
2026-04-09 06:38:51.154374 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 06:38:51.154381 | orchestrator | Thursday 09 April 2026 06:36:22 +0000 (0:00:00.670) 0:03:27.547 ********
2026-04-09 06:38:51.154388 | orchestrator |
2026-04-09 06:38:51.154395 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 06:38:51.154425 | orchestrator | Thursday 09 April 2026 06:36:23 +0000 (0:00:00.465) 0:03:28.013 ********
2026-04-09 06:38:51.154433 | orchestrator |
2026-04-09 06:38:51.154440 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 06:38:51.154447 | orchestrator | Thursday 09 April 2026 06:36:23 +0000 (0:00:00.480) 0:03:28.494 ********
2026-04-09 06:38:51.154453 | orchestrator |
2026-04-09 06:38:51.154460 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 06:38:51.154467 | orchestrator | Thursday 09 April 2026 06:36:24 +0000 (0:00:00.456) 0:03:28.950 ********
2026-04-09 06:38:51.154474 | orchestrator |
2026-04-09 06:38:51.154480 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-09 06:38:51.154501 | orchestrator | Thursday 09 April 2026 06:36:24 +0000 (0:00:00.793) 0:03:29.744 ********
2026-04-09 06:38:51.154508 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:38:51.154515 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:38:51.154521 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:38:51.154527 | orchestrator |
2026-04-09 06:38:51.154533 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-09 06:38:51.154540 | orchestrator | Thursday 09 April 2026 06:37:13 +0000 (0:00:49.057) 0:04:18.801 ********
2026-04-09 06:38:51.154547 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:38:51.154554 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:38:51.154560 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:38:51.154566 | orchestrator |
2026-04-09 06:38:51.154573 | orchestrator | TASK [neutron : Checking neutron pending contract scripts] *********************
2026-04-09 06:38:51.154580 | orchestrator | Thursday 09 April 2026 06:38:22 +0000 (0:01:08.423) 0:05:27.225 ********
2026-04-09 06:38:51.154587 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:38:51.154593 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:38:51.154599 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:38:51.154605 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:38:51.154612 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:38:51.154619 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:38:51.154626 | orchestrator |
2026-04-09 06:38:51.154633 | orchestrator | TASK [neutron : Stopping all neutron-server for contract db] *******************
2026-04-09 06:38:51.154640 | orchestrator | Thursday 09 April 2026 06:38:24 +0000 (0:00:02.017) 0:05:29.242 ********
2026-04-09 06:38:51.154646 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:38:51.154653 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:38:51.154661 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:38:51.154668 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:38:51.154675 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:38:51.154681 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:38:51.154689 | orchestrator |
2026-04-09 06:38:51.154696 | orchestrator | TASK [neutron : Running Neutron database contract container] *******************
2026-04-09 06:38:51.154704 | orchestrator | Thursday 09 April 2026 06:38:29 +0000 (0:00:05.180) 0:05:34.423 ********
2026-04-09 06:38:51.154710 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:38:51.154717 | orchestrator |
2026-04-09 06:38:51.154725 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 06:38:51.154747 | orchestrator | Thursday 09 April 2026 06:38:45 +0000 (0:00:15.796) 0:05:50.219 ********
2026-04-09 06:38:51.154754 | orchestrator |
2026-04-09 06:38:51.154762 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 06:38:51.154769 | orchestrator | Thursday 09 April 2026 06:38:45 +0000 (0:00:00.444) 0:05:50.663 ********
2026-04-09 06:38:51.154775 | orchestrator |
2026-04-09
06:38:51.154781 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 06:38:51.154788 | orchestrator | Thursday 09 April 2026 06:38:46 +0000 (0:00:00.462) 0:05:51.125 ******** 2026-04-09 06:38:51.154795 | orchestrator | 2026-04-09 06:38:51.154801 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 06:38:51.154808 | orchestrator | Thursday 09 April 2026 06:38:46 +0000 (0:00:00.461) 0:05:51.587 ******** 2026-04-09 06:38:51.154820 | orchestrator | 2026-04-09 06:38:51.154827 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 06:38:51.154834 | orchestrator | Thursday 09 April 2026 06:38:47 +0000 (0:00:00.455) 0:05:52.042 ******** 2026-04-09 06:38:51.154840 | orchestrator | 2026-04-09 06:38:51.154847 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-09 06:38:51.154854 | orchestrator | Thursday 09 April 2026 06:38:47 +0000 (0:00:00.452) 0:05:52.494 ******** 2026-04-09 06:38:51.154860 | orchestrator | 2026-04-09 06:38:51.154866 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 06:38:51.154873 | orchestrator | Thursday 09 April 2026 06:38:48 +0000 (0:00:00.797) 0:05:53.291 ******** 2026-04-09 06:38:51.154880 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:38:51.154887 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:38:51.154913 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:38:51.154920 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:38:51.154926 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:38:51.154933 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:38:51.154940 | orchestrator | 2026-04-09 06:38:51.154947 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 06:38:51.154955 | 
orchestrator | testbed-node-0 : ok=21  changed=8  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0 2026-04-09 06:38:51.154964 | orchestrator | testbed-node-1 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2026-04-09 06:38:51.154971 | orchestrator | testbed-node-2 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2026-04-09 06:38:51.154978 | orchestrator | testbed-node-3 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0 2026-04-09 06:38:51.154985 | orchestrator | testbed-node-4 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0 2026-04-09 06:38:51.154992 | orchestrator | testbed-node-5 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0 2026-04-09 06:38:51.154999 | orchestrator | 2026-04-09 06:38:51.155006 | orchestrator | 2026-04-09 06:38:51.155012 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 06:38:51.155019 | orchestrator | Thursday 09 April 2026 06:38:51 +0000 (0:00:02.771) 0:05:56.063 ******** 2026-04-09 06:38:51.155030 | orchestrator | =============================================================================== 2026-04-09 06:38:51.155037 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 68.42s 2026-04-09 06:38:51.155045 | orchestrator | neutron : Restart neutron-server container ----------------------------- 49.06s 2026-04-09 06:38:51.155051 | orchestrator | neutron : Running Neutron database expand container -------------------- 38.60s 2026-04-09 06:38:51.155058 | orchestrator | neutron : Running Neutron database contract container ------------------ 15.80s 2026-04-09 06:38:51.155064 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.41s 2026-04-09 06:38:51.155070 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.51s 2026-04-09 06:38:51.155076 
| orchestrator | neutron : Stopping all neutron-server for contract db ------------------- 5.18s 2026-04-09 06:38:51.155082 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.13s 2026-04-09 06:38:51.155088 | orchestrator | neutron : include_tasks ------------------------------------------------- 4.79s 2026-04-09 06:38:51.155095 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.59s 2026-04-09 06:38:51.155102 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.06s 2026-04-09 06:38:51.155113 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.95s 2026-04-09 06:38:51.155120 | orchestrator | service-check-containers : neutron | Check containers ------------------- 3.84s 2026-04-09 06:38:51.155127 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.83s 2026-04-09 06:38:51.155133 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.80s 2026-04-09 06:38:51.155140 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.76s 2026-04-09 06:38:51.155146 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.74s 2026-04-09 06:38:51.155153 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.73s 2026-04-09 06:38:51.155159 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.64s 2026-04-09 06:38:51.155173 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.64s 2026-04-09 06:38:51.767487 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-09 06:38:51.767577 | orchestrator | + osism apply -a reconfigure nova 2026-04-09 06:38:53.170508 | orchestrator | 2026-04-09 06:38:53 | INFO  | Prepare task for execution of nova. 
2026-04-09 06:38:53.237121 | orchestrator | 2026-04-09 06:38:53 | INFO  | Task c7f9ec91-f35a-44aa-8ca2-8f3cc981a830 (nova) was prepared for execution.
2026-04-09 06:38:53.237242 | orchestrator | 2026-04-09 06:38:53 | INFO  | It takes a moment until task c7f9ec91-f35a-44aa-8ca2-8f3cc981a830 (nova) has been started and output is visible here.
2026-04-09 06:40:54.043407 | orchestrator |
2026-04-09 06:40:54.043528 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 06:40:54.043546 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-09 06:40:54.043559 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-09 06:40:54.043583 | orchestrator |
2026-04-09 06:40:54.043595 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-09 06:40:54.043606 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-09 06:40:54.043617 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-09 06:40:54.043639 | orchestrator | Thursday 09 April 2026 06:38:57 +0000 (0:00:01.094) 0:00:01.094 ********
2026-04-09 06:40:54.043651 | orchestrator | changed: [testbed-manager]
2026-04-09 06:40:54.043662 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:40:54.043673 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:40:54.043684 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:40:54.043695 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:40:54.043706 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:40:54.043717 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:40:54.043728 | orchestrator |
2026-04-09 06:40:54.043740 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 06:40:54.043752 | orchestrator | Thursday 09 April 2026 06:39:00 +0000 (0:00:02.458) 0:00:03.553 ********
2026-04-09 06:40:54.043763 | orchestrator | changed: [testbed-manager]
2026-04-09 06:40:54.043774 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:40:54.043785 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:40:54.043796 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:40:54.043807 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:40:54.043881 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:40:54.043894 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:40:54.043905 | orchestrator |
2026-04-09 06:40:54.043916 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 06:40:54.043928 | orchestrator | Thursday 09 April 2026 06:39:01 +0000 (0:00:00.872) 0:00:04.425 ********
2026-04-09 06:40:54.043939 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-09 06:40:54.043974 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-09 06:40:54.043987 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-09 06:40:54.043998 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-09 06:40:54.044008 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-09 06:40:54.044019 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-09 06:40:54.044030 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-09 06:40:54.044041 | orchestrator |
2026-04-09 06:40:54.044068 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-09 06:40:54.044080 | orchestrator |
2026-04-09 06:40:54.044091 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-09 06:40:54.044102 | orchestrator | Thursday 09 April 2026 06:39:02 +0000 (0:00:01.280) 0:00:05.706 ********
2026-04-09 06:40:54.044113 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 06:40:54.044124 | orchestrator |
2026-04-09 06:40:54.044135 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-09 06:40:54.044146 | orchestrator | Thursday 09 April 2026 06:39:03 +0000 (0:00:01.398) 0:00:07.104 ********
2026-04-09 06:40:54.044158 | orchestrator | ok: [testbed-node-0] => (item=nova_cell0)
2026-04-09 06:40:54.044169 | orchestrator | ok: [testbed-node-0] => (item=nova_api)
2026-04-09 06:40:54.044180 | orchestrator |
2026-04-09 06:40:54.044191 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-09 06:40:54.044202 | orchestrator | Thursday 09 April 2026 06:39:08 +0000 (0:00:04.500) 0:00:11.604 ********
2026-04-09 06:40:54.044213 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-09 06:40:54.044224 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-09 06:40:54.044236 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.044247 | orchestrator |
2026-04-09 06:40:54.044258 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-09 06:40:54.044269 | orchestrator | Thursday 09 April 2026 06:39:12 +0000 (0:00:04.647) 0:00:16.252 ********
2026-04-09 06:40:54.044280 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.044291 | orchestrator |
2026-04-09 06:40:54.044302 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-09 06:40:54.044313 | orchestrator | Thursday 09 April 2026 06:39:13 +0000 (0:00:00.701) 0:00:16.954 ********
2026-04-09 06:40:54.044324 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.044335 | orchestrator |
2026-04-09 06:40:54.044346 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-09 06:40:54.044357 | orchestrator | Thursday 09 April 2026 06:39:14 +0000 (0:00:01.162) 0:00:18.117 ********
2026-04-09 06:40:54.044368 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:40:54.044379 | orchestrator |
2026-04-09 06:40:54.044390 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-09 06:40:54.044401 | orchestrator | Thursday 09 April 2026 06:39:17 +0000 (0:00:02.944) 0:00:21.061 ********
2026-04-09 06:40:54.044412 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:40:54.044424 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.044435 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:40:54.044446 | orchestrator |
2026-04-09 06:40:54.044457 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-09 06:40:54.044468 | orchestrator | Thursday 09 April 2026 06:39:18 +0000 (0:00:00.717) 0:00:21.778 ********
2026-04-09 06:40:54.044479 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.044490 | orchestrator |
2026-04-09 06:40:54.044501 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-09 06:40:54.044530 | orchestrator | Thursday 09 April 2026 06:39:54 +0000 (0:00:36.437) 0:00:58.216 ********
2026-04-09 06:40:54.044541 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.044552 | orchestrator |
2026-04-09 06:40:54.044564 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-09 06:40:54.044582 | orchestrator | Thursday 09 April 2026 06:40:10 +0000 (0:00:15.197) 0:01:13.414 ********
2026-04-09 06:40:54.044594 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.044605 | orchestrator |
2026-04-09 06:40:54.044616 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-09 06:40:54.044627 | orchestrator | Thursday 09 April 2026 06:40:25 +0000 (0:00:15.591) 0:01:29.006 ********
2026-04-09 06:40:54.044638 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.044649 | orchestrator |
2026-04-09 06:40:54.044661 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-09 06:40:54.044672 | orchestrator | Thursday 09 April 2026 06:40:26 +0000 (0:00:01.273) 0:01:30.279 ********
2026-04-09 06:40:54.044683 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:40:54.044694 | orchestrator |
2026-04-09 06:40:54.044705 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-09 06:40:54.044716 | orchestrator | Thursday 09 April 2026 06:40:27 +0000 (0:00:00.603) 0:01:30.883 ********
2026-04-09 06:40:54.044727 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:40:54.044738 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.044750 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:40:54.044761 | orchestrator |
2026-04-09 06:40:54.044772 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-09 06:40:54.044783 | orchestrator | Thursday 09 April 2026 06:40:28 +0000 (0:00:00.549) 0:01:31.433 ********
2026-04-09 06:40:54.044794 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:40:54.044805 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.044842 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:40:54.044862 | orchestrator |
2026-04-09 06:40:54.044883 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-09 06:40:54.044901 | orchestrator |
2026-04-09 06:40:54.044918 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-09 06:40:54.044935 | orchestrator | Thursday 09 April 2026 06:40:29 +0000 (0:00:00.904) 0:01:32.338 ********
2026-04-09 06:40:54.044955 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 06:40:54.044973 | orchestrator |
2026-04-09 06:40:54.044984 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-09 06:40:54.044995 | orchestrator | Thursday 09 April 2026 06:40:29 +0000 (0:00:00.993) 0:01:33.331 ********
2026-04-09 06:40:54.045006 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.045017 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:40:54.045027 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.045038 | orchestrator |
2026-04-09 06:40:54.045049 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-09 06:40:54.045060 | orchestrator | Thursday 09 April 2026 06:40:32 +0000 (0:00:02.137) 0:01:35.468 ********
2026-04-09 06:40:54.045071 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.045088 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:40:54.045099 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.045110 | orchestrator |
2026-04-09 06:40:54.045121 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-09 06:40:54.045132 | orchestrator | Thursday 09 April 2026 06:40:34 +0000 (0:00:02.571) 0:01:38.040 ********
2026-04-09 06:40:54.045143 | orchestrator | skipping: [testbed-node-1] => (item=openstack)
2026-04-09 06:40:54.045154 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.045165 | orchestrator | skipping: [testbed-node-2] => (item=openstack)
2026-04-09 06:40:54.045176 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:40:54.045187 | orchestrator | ok: [testbed-node-0] => (item=openstack)
2026-04-09 06:40:54.045197 | orchestrator |
2026-04-09 06:40:54.045208 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-09 06:40:54.045219 | orchestrator | Thursday 09 April 2026 06:40:39 +0000 (0:00:04.433) 0:01:42.473 ********
2026-04-09 06:40:54.045230 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-09 06:40:54.045249 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.045260 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-09 06:40:54.045271 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:40:54.045282 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-09 06:40:54.045293 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-09 06:40:54.045304 | orchestrator |
2026-04-09 06:40:54.045316 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-09 06:40:54.045335 | orchestrator | Thursday 09 April 2026 06:40:51 +0000 (0:00:12.727) 0:01:55.201 ********
2026-04-09 06:40:54.045352 | orchestrator | skipping: [testbed-node-0] => (item=openstack)
2026-04-09 06:40:54.045368 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:40:54.045386 | orchestrator | skipping: [testbed-node-1] => (item=openstack)
2026-04-09 06:40:54.045403 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.045421 | orchestrator | skipping: [testbed-node-2] => (item=openstack)
2026-04-09 06:40:54.045438 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:40:54.045456 | orchestrator |
2026-04-09 06:40:54.045474 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-09 06:40:54.045491 | orchestrator | Thursday 09 April 2026 06:40:52 +0000 (0:00:00.576) 0:01:55.778 ********
2026-04-09 06:40:54.045510 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-09 06:40:54.045529 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:40:54.045548 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-09 06:40:54.045566 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.045581 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-09 06:40:54.045592 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:40:54.045603 | orchestrator |
2026-04-09 06:40:54.045613 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-09 06:40:54.045624 | orchestrator | Thursday 09 April 2026 06:40:53 +0000 (0:00:01.084) 0:01:56.862 ********
2026-04-09 06:40:54.045635 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:40:54.045646 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:40:54.045666 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:10.755931 | orchestrator |
2026-04-09 06:42:10.756049 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-09 06:42:10.756067 | orchestrator | Thursday 09 April 2026 06:40:54 +0000 (0:00:00.599) 0:01:57.462 ********
2026-04-09 06:42:10.756079 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:10.756091 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:10.756102 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:42:10.756114 | orchestrator |
2026-04-09 06:42:10.756125 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-09 06:42:10.756137 | orchestrator | Thursday 09 April 2026 06:40:55 +0000 (0:00:01.002) 0:01:58.464 ********
2026-04-09 06:42:10.756148 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:10.756159 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:10.756170 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:42:10.756180 | orchestrator |
2026-04-09 06:42:10.756192 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-09 06:42:10.756203 | orchestrator | Thursday 09 April 2026 06:40:57 +0000 (0:00:02.711) 0:02:01.176 ********
2026-04-09 06:42:10.756213 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:10.756225 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:10.756236 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:42:10.756246 | orchestrator |
2026-04-09 06:42:10.756257 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-09 06:42:10.756268 | orchestrator | Thursday 09 April 2026 06:41:10 +0000 (0:00:12.463) 0:02:13.639 ********
2026-04-09 06:42:10.756279 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:10.756290 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:10.756302 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:42:10.756313 | orchestrator |
2026-04-09 06:42:10.756324 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-09 06:42:10.756360 | orchestrator | Thursday 09 April 2026 06:41:21 +0000 (0:00:11.474) 0:02:25.114 ********
2026-04-09 06:42:10.756371 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:42:10.756383 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:10.756393 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:10.756404 | orchestrator |
2026-04-09 06:42:10.756416 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-09 06:42:10.756429 | orchestrator | Thursday 09 April 2026 06:41:23 +0000 (0:00:01.414) 0:02:26.529 ********
2026-04-09 06:42:10.756441 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:42:10.756454 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:10.756466 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:10.756497 | orchestrator |
2026-04-09 06:42:10.756521 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-09 06:42:10.756535 | orchestrator | Thursday 09 April 2026 06:41:24 +0000 (0:00:00.933) 0:02:27.462 ********
2026-04-09 06:42:10.756549 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:10.756562 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:10.756573 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:42:10.756584 | orchestrator |
2026-04-09 06:42:10.756595 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-09 06:42:10.756621 | orchestrator | Thursday 09 April 2026 06:41:37 +0000 (0:00:13.487) 0:02:40.950 ********
2026-04-09 06:42:10.756633 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:42:10.756644 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:10.756655 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:10.756666 | orchestrator |
2026-04-09 06:42:10.756677 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-09 06:42:10.756688 | orchestrator |
2026-04-09 06:42:10.756699 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-09 06:42:10.756710 | orchestrator | Thursday 09 April 2026 06:41:38 +0000 (0:00:00.725) 0:02:41.675 ********
2026-04-09 06:42:10.756721 | orchestrator | included: /ansible/roles/nova/tasks/reconfigure.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 06:42:10.756733 | orchestrator |
2026-04-09 06:42:10.756744 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-04-09 06:42:10.756755 | orchestrator | Thursday 09 April 2026 06:41:39 +0000 (0:00:01.156) 0:02:42.831 ********
2026-04-09 06:42:10.756787 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-09 06:42:10.756799 | orchestrator | ok: [testbed-node-0] => (item=nova (compute))
2026-04-09 06:42:10.756810 | orchestrator |
2026-04-09 06:42:10.756821 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-04-09 06:42:10.756832 | orchestrator | Thursday 09 April 2026 06:41:43 +0000 (0:00:03.526) 0:02:46.358 ********
2026-04-09 06:42:10.756843 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-09 06:42:10.756856 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-09 06:42:10.756867 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-09 06:42:10.756878 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-09 06:42:10.756889 | orchestrator |
2026-04-09 06:42:10.756900 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-09 06:42:10.756910 | orchestrator | Thursday 09 April 2026 06:41:49 +0000 (0:00:06.592) 0:02:52.951 ********
2026-04-09 06:42:10.756921 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 06:42:10.756932 | orchestrator |
2026-04-09 06:42:10.756943 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-09 06:42:10.756954 | orchestrator | Thursday 09 April 2026 06:41:53 +0000 (0:00:03.385) 0:02:56.337 ********
2026-04-09 06:42:10.756974 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-09 06:42:10.756985 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 06:42:10.756996 | orchestrator |
2026-04-09 06:42:10.757007 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-09 06:42:10.757036 | orchestrator | Thursday 09 April 2026 06:41:57 +0000 (0:00:04.946) 0:03:01.283 ********
2026-04-09 06:42:10.757048 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 06:42:10.757059 | orchestrator |
2026-04-09 06:42:10.757070 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-04-09 06:42:10.757081 | orchestrator | Thursday 09 April 2026 06:42:01 +0000 (0:00:03.448) 0:03:04.732 ********
2026-04-09 06:42:10.757092 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> admin)
2026-04-09 06:42:10.757103 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> service)
2026-04-09 06:42:10.757114 | orchestrator |
2026-04-09 06:42:10.757125 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-09 06:42:10.757136 | orchestrator | Thursday 09 April 2026 06:42:09 +0000 (0:00:07.708) 0:03:12.440 ********
2026-04-09 06:42:10.757153 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:10.757176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:10.757190 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:10.757220 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:16.232196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:16.232350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:42:16.232366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:42:16.232399 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:42:16.232410 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:42:16.232418 | orchestrator | 2026-04-09 06:42:16.232428 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-09 06:42:16.232438 | orchestrator | Thursday 09 April 2026 06:42:11 +0000 (0:00:02.428) 0:03:14.869 ******** 2026-04-09 06:42:16.232464 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:42:16.232474 | orchestrator | 2026-04-09 06:42:16.232482 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-09 06:42:16.232490 | orchestrator | Thursday 09 April 2026 06:42:11 +0000 (0:00:00.144) 0:03:15.013 ******** 2026-04-09 06:42:16.232498 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:42:16.232506 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:42:16.232515 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:42:16.232522 | orchestrator | 2026-04-09 06:42:16.232531 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-09 06:42:16.232540 | 
orchestrator | Thursday 09 April 2026 06:42:12 +0000 (0:00:00.343) 0:03:15.357 ******** 2026-04-09 06:42:16.232548 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 06:42:16.232556 | orchestrator | 2026-04-09 06:42:16.232564 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-09 06:42:16.232572 | orchestrator | Thursday 09 April 2026 06:42:13 +0000 (0:00:01.080) 0:03:16.437 ******** 2026-04-09 06:42:16.232580 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:42:16.232588 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:42:16.232597 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:42:16.232605 | orchestrator | 2026-04-09 06:42:16.232613 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 06:42:16.232621 | orchestrator | Thursday 09 April 2026 06:42:13 +0000 (0:00:00.338) 0:03:16.775 ******** 2026-04-09 06:42:16.232630 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:42:16.232639 | orchestrator | 2026-04-09 06:42:16.232647 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-09 06:42:16.232655 | orchestrator | Thursday 09 April 2026 06:42:14 +0000 (0:00:01.276) 0:03:18.052 ******** 2026-04-09 06:42:16.232669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:42:16.232686 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:42:16.232703 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:42:18.834156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:42:18.834276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 
'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:42:18.834311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-09 06:42:18.834323 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:42:18.834353 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:42:18.834368 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:42:18.834377 | 
orchestrator | 2026-04-09 06:42:18.834387 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 06:42:18.834403 | orchestrator | Thursday 09 April 2026 06:42:18 +0000 (0:00:03.310) 0:03:21.362 ******** 2026-04-09 06:42:18.834414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:18.834423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 
'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:18.834434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:42:18.834443 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:42:18.834466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:19.871248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:19.871345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:42:19.871357 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:42:19.871366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:19.871373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:19.871432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:42:19.871442 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:42:19.871449 | orchestrator | 2026-04-09 06:42:19.871457 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 06:42:19.871466 | orchestrator | Thursday 09 April 2026 06:42:19 +0000 (0:00:01.278) 0:03:22.641 ******** 2026-04-09 06:42:19.871474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:19.871483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:19.871491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:19.871497 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:42:19.871513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:22.702961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:22.703102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:22.703115 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:22.703127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:22.703156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:22.703207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:22.703217 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:22.703226 | orchestrator |
2026-04-09 06:42:22.703235 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-04-09 06:42:22.703246 | orchestrator | Thursday 09 April 2026 06:42:20 +0000 (0:00:01.056) 0:03:23.698 ********
2026-04-09 06:42:22.703254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:22.703264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:22.703279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:22.703302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:30.096482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:30.096630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:30.096672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:30.096704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:30.096713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:30.096722 | orchestrator |
2026-04-09 06:42:30.096733 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-04-09 06:42:30.096815 | orchestrator | Thursday 09 April 2026 06:42:24 +0000 (0:00:03.686) 0:03:27.384 ********
2026-04-09 06:42:30.096826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:30.096837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:30.096860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:30.096892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:33.643109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:33.643249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:33.643323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:33.643342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:33.643357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:33.643372 | orchestrator |
2026-04-09 06:42:33.643388 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-04-09 06:42:33.643426 | orchestrator | Thursday 09 April 2026 06:42:33 +0000 (0:00:09.019) 0:03:36.404 ********
2026-04-09 06:42:33.643445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:33.643472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:33.643497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:33.643513 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:42:33.643531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:33.643558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:44.947094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:44.947230 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:44.947251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:44.947281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:44.947295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:42:44.947307 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:44.947318 | orchestrator |
2026-04-09 06:42:44.947331 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-04-09 06:42:44.947343 | orchestrator | Thursday 09 April 2026 06:42:33 +0000 (0:00:00.756) 0:03:37.161 ********
2026-04-09 06:42:44.947354 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:42:44.947365 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:44.947376 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:44.947387 | orchestrator |
2026-04-09 06:42:44.947398 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] *****************************
2026-04-09 06:42:44.947420 | orchestrator | Thursday 09 April 2026 06:42:34 +0000 (0:00:00.758) 0:03:37.920 ********
2026-04-09 06:42:44.947431 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:42:44.947443 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:44.947453 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:44.947464 | orchestrator |
2026-04-09 06:42:44.947475 | orchestrator | TASK [nova : Copying over vendordata file for nova services] *******************
2026-04-09 06:42:44.947502 | orchestrator | Thursday 09 April 2026 06:42:35 +0000 (0:00:01.024) 0:03:38.944 ********
2026-04-09 06:42:44.947515 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)
2026-04-09 06:42:44.947526 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-09 06:42:44.947537 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:42:44.947548 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)
2026-04-09 06:42:44.947559 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-09 06:42:44.947570 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:42:44.947580 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)
2026-04-09 06:42:44.947591 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-09 06:42:44.947602 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:42:44.947612 | orchestrator |
2026-04-09 06:42:44.947623 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************
2026-04-09 06:42:44.947634 | orchestrator | Thursday 09 April 2026 06:42:36 +0000 (0:00:00.579) 0:03:39.524 ********
2026-04-09 06:42:44.947646 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'})
2026-04-09 06:42:44.947659 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'})
2026-04-09 06:42:44.947670 | orchestrator |
2026-04-09 06:42:44.947681 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api
uWSGI config] ***************
2026-04-09 06:42:44.947691 | orchestrator | Thursday 09 April 2026 06:42:38 +0000 (0:00:01.843) 0:03:41.367 ********
2026-04-09 06:42:44.947702 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:42:44.947713 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:42:44.947724 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:42:44.947758 | orchestrator |
2026-04-09 06:42:44.947770 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] **********
2026-04-09 06:42:44.947780 | orchestrator | Thursday 09 April 2026 06:42:40 +0000 (0:00:02.684) 0:03:44.052 ********
2026-04-09 06:42:44.947791 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:42:44.947802 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:42:44.947813 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:42:44.947824 | orchestrator |
2026-04-09 06:42:44.947835 | orchestrator | TASK [service-check-containers : nova | Check containers] **********************
2026-04-09 06:42:44.947846 | orchestrator | Thursday 09 April 2026 06:42:43 +0000 (0:00:02.514) 0:03:46.567 ********
2026-04-09 06:42:44.947864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:44.947885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:44.947907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:47.318113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:42:47.318277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:42:47.318328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:42:47.318345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:42:47.318383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:42:47.318396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:42:47.318409 | orchestrator | 2026-04-09 06:42:47.318430 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-09 06:42:47.318444 | orchestrator | Thursday 09 April 
2026 06:42:46 +0000 (0:00:03.215) 0:03:49.783 ******** 2026-04-09 06:42:47.318456 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 06:42:47.318469 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:42:47.318481 | orchestrator | } 2026-04-09 06:42:47.318493 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 06:42:47.318504 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:42:47.318515 | orchestrator | } 2026-04-09 06:42:47.318526 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 06:42:47.318545 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:42:47.318556 | orchestrator | } 2026-04-09 06:42:47.318568 | orchestrator | 2026-04-09 06:42:47.318580 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 06:42:47.318591 | orchestrator | Thursday 09 April 2026 06:42:46 +0000 (0:00:00.371) 0:03:50.154 ******** 2026-04-09 06:42:47.318603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:47.318616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:42:47.318637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:44:14.458371 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:44:14.458514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:44:14.458559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:44:14.458574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:44:14.458589 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:44:14.458610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:44:14.458827 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:44:14.458878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:44:14.458891 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:44:14.458904 | orchestrator | 2026-04-09 06:44:14.458916 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 06:44:14.458929 | orchestrator | Thursday 09 April 2026 06:42:48 +0000 (0:00:01.406) 
0:03:51.561 ******** 2026-04-09 06:44:14.458940 | orchestrator | 2026-04-09 06:44:14.458951 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 06:44:14.458962 | orchestrator | Thursday 09 April 2026 06:42:48 +0000 (0:00:00.337) 0:03:51.898 ******** 2026-04-09 06:44:14.458973 | orchestrator | 2026-04-09 06:44:14.458984 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 06:44:14.458995 | orchestrator | Thursday 09 April 2026 06:42:48 +0000 (0:00:00.153) 0:03:52.051 ******** 2026-04-09 06:44:14.459005 | orchestrator | 2026-04-09 06:44:14.459017 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-09 06:44:14.459028 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-09 06:44:14.459039 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-09 06:44:14.459061 | orchestrator | Thursday 09 April 2026 06:42:48 +0000 (0:00:00.167) 0:03:52.219 ******** 2026-04-09 06:44:14.459073 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:44:14.459084 | orchestrator | changed: [testbed-node-2] 2026-04-09 06:44:14.459095 | orchestrator | changed: [testbed-node-1] 2026-04-09 06:44:14.459106 | orchestrator | 2026-04-09 06:44:14.459117 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-09 06:44:14.459128 | orchestrator | Thursday 09 April 2026 06:43:15 +0000 (0:00:26.469) 0:04:18.688 ******** 2026-04-09 06:44:14.459139 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:44:14.459150 | orchestrator | changed: [testbed-node-1] 2026-04-09 06:44:14.459161 | orchestrator | changed: [testbed-node-2] 2026-04-09 06:44:14.459172 | orchestrator | 2026-04-09 06:44:14.459183 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-04-09 06:44:14.459194 
| orchestrator | Thursday 09 April 2026 06:43:27 +0000 (0:00:12.597) 0:04:31.286 ******** 2026-04-09 06:44:14.459205 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:44:14.459215 | orchestrator | changed: [testbed-node-1] 2026-04-09 06:44:14.459226 | orchestrator | changed: [testbed-node-2] 2026-04-09 06:44:14.459237 | orchestrator | 2026-04-09 06:44:14.459248 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-09 06:44:14.459259 | orchestrator | 2026-04-09 06:44:14.459270 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 06:44:14.459281 | orchestrator | Thursday 09 April 2026 06:43:38 +0000 (0:00:10.538) 0:04:41.825 ******** 2026-04-09 06:44:14.459292 | orchestrator | included: /ansible/roles/nova-cell/tasks/reconfigure.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:44:14.459305 | orchestrator | 2026-04-09 06:44:14.459316 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 06:44:14.459334 | orchestrator | Thursday 09 April 2026 06:43:40 +0000 (0:00:01.678) 0:04:43.503 ******** 2026-04-09 06:44:14.459363 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:44:14.459385 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:44:14.459402 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:44:14.459420 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:44:14.459438 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:44:14.459456 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:44:14.459476 | orchestrator | 2026-04-09 06:44:14.459495 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-04-09 06:44:14.459515 | orchestrator | Thursday 09 April 2026 06:43:41 +0000 (0:00:01.194) 0:04:44.698 ******** 2026-04-09 06:44:14.459544 | orchestrator | 
changed: [testbed-node-3] 2026-04-09 06:44:51.094738 | orchestrator | 2026-04-09 06:44:51.094872 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-04-09 06:44:51.094899 | orchestrator | Thursday 09 April 2026 06:44:14 +0000 (0:00:33.191) 0:05:17.889 ******** 2026-04-09 06:44:51.094919 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:44:51.094940 | orchestrator | 2026-04-09 06:44:51.094959 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-04-09 06:44:51.094979 | orchestrator | Thursday 09 April 2026 06:44:16 +0000 (0:00:01.526) 0:05:19.415 ******** 2026-04-09 06:44:51.094997 | orchestrator | included: service-image-info for testbed-node-3 2026-04-09 06:44:51.095017 | orchestrator | 2026-04-09 06:44:51.095029 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-04-09 06:44:51.095041 | orchestrator | Thursday 09 April 2026 06:44:17 +0000 (0:00:01.197) 0:05:20.613 ******** 2026-04-09 06:44:51.095052 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:44:51.095063 | orchestrator | 2026-04-09 06:44:51.095074 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-09 06:44:51.095102 | orchestrator | Thursday 09 April 2026 06:44:20 +0000 (0:00:03.469) 0:05:24.082 ******** 2026-04-09 06:44:51.095114 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:44:51.095125 | orchestrator | 2026-04-09 06:44:51.095137 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-04-09 06:44:51.095148 | orchestrator | Thursday 09 April 2026 06:44:22 +0000 (0:00:01.999) 0:05:26.082 ******** 2026-04-09 06:44:51.095159 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:44:51.095171 | orchestrator | 2026-04-09 06:44:51.095182 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-09 
06:44:51.095194 | orchestrator | Thursday 09 April 2026 06:44:24 +0000 (0:00:02.101) 0:05:28.183 ******** 2026-04-09 06:44:51.095205 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:44:51.095216 | orchestrator | 2026-04-09 06:44:51.095227 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-04-09 06:44:51.095238 | orchestrator | Thursday 09 April 2026 06:44:27 +0000 (0:00:02.384) 0:05:30.567 ******** 2026-04-09 06:44:51.095249 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:44:51.095260 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:44:51.095273 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:44:51.095284 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:44:51.095296 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:44:51.095306 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:44:51.095317 | orchestrator | 2026-04-09 06:44:51.095329 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-04-09 06:44:51.095340 | orchestrator | Thursday 09 April 2026 06:44:31 +0000 (0:00:04.216) 0:05:34.784 ******** 2026-04-09 06:44:51.095351 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:44:51.095362 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:44:51.095373 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:44:51.095384 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:44:51.095396 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:44:51.095407 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:44:51.095418 | orchestrator | 2026-04-09 06:44:51.095434 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-04-09 06:44:51.095476 | orchestrator | Thursday 09 April 2026 06:44:35 +0000 (0:00:04.445) 0:05:39.229 ******** 2026-04-09 06:44:51.095488 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:44:51.095499 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 06:44:51.095510 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:44:51.095521 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 06:44:51.095532 | orchestrator |  "changed": false,
2026-04-09 06:44:51.095543 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n"
2026-04-09 06:44:51.095555 | orchestrator | }
2026-04-09 06:44:51.095567 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 06:44:51.095578 | orchestrator |  "changed": false,
2026-04-09 06:44:51.095588 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n"
2026-04-09 06:44:51.095666 | orchestrator | }
2026-04-09 06:44:51.095677 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 06:44:51.095688 | orchestrator |  "changed": false,
2026-04-09 06:44:51.095699 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n"
2026-04-09 06:44:51.095710 | orchestrator | }
2026-04-09 06:44:51.095722 | orchestrator |
2026-04-09 06:44:51.095733 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-09 06:44:51.095744 | orchestrator | Thursday 09 April 2026 06:44:41 +0000 (0:00:05.923) 0:05:45.153 ********
2026-04-09 06:44:51.095755 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:44:51.095766 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:44:51.095777 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:44:51.095788 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 06:44:51.095799 | orchestrator |
2026-04-09 06:44:51.095810 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-09 06:44:51.095821 | orchestrator | Thursday 09 April 2026 06:44:43 +0000 (0:00:01.476) 0:05:46.629 ********
2026-04-09 06:44:51.095832 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-09 06:44:51.095844 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-09 06:44:51.095854 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-09 06:44:51.095865 | orchestrator |
2026-04-09 06:44:51.095876 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-09 06:44:51.095887 | orchestrator | Thursday 09 April 2026 06:44:43 +0000 (0:00:00.675) 0:05:47.304 ********
2026-04-09 06:44:51.095902 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-09 06:44:51.095921 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-09 06:44:51.095941 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-09 06:44:51.095957 | orchestrator |
2026-04-09 06:44:51.095976 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-09 06:44:51.095996 | orchestrator | Thursday 09 April 2026 06:44:45 +0000 (0:00:01.450) 0:05:48.755 ********
2026-04-09 06:44:51.096010 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-09 06:44:51.096030 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:44:51.096071 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-09 06:44:51.096092 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:44:51.096111 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-09 06:44:51.096130 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:44:51.096150 | orchestrator |
2026-04-09 06:44:51.096168 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-09 06:44:51.096187 | orchestrator | Thursday 09 April 2026 06:44:46 +0000 (0:00:00.602) 0:05:49.358 ********
2026-04-09 06:44:51.096206 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:44:51.096225 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:44:51.096244 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:44:51.096278 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:44:51.096296 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:44:51.096318 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:44:51.096330 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:44:51.096341 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:44:51.096352 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:44:51.096364 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:44:51.096375 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:44:51.096386 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:44:51.096397 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:44:51.096409 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:44:51.096420 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:44:51.096431 | orchestrator |
2026-04-09 06:44:51.096442 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-09 06:44:51.096453 | orchestrator | Thursday 09 April 2026 06:44:47 +0000 (0:00:01.012) 0:05:50.370 ********
2026-04-09 06:44:51.096465 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:44:51.096476 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:44:51.096487 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:44:51.096578 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:44:51.096625 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:44:51.096639 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:44:51.096650 | orchestrator |
2026-04-09 06:44:51.096662 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-09 06:44:51.096673 | orchestrator | Thursday 09 April 2026 06:44:48 +0000 (0:00:01.161) 0:05:51.531 ********
2026-04-09 06:44:51.096683 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:44:51.096695 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:44:51.096705 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:44:51.096716 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:44:51.096727 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:44:51.096738 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:44:51.096749 | orchestrator |
2026-04-09 06:44:51.096760 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-09 06:44:51.096771 | orchestrator | Thursday 09 April 2026 06:44:49 +0000 (0:00:01.506) 0:05:53.037 ********
2026-04-09 06:44:51.096786 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:44:51.096802 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:44:51.096837 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043084 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043212 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043225 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043238 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043295 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043316 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043328 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043340 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043352 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043373 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:52.043392 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762249 | orchestrator | 2026-04-09 06:44:56.762364 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 06:44:56.762401 | orchestrator | Thursday 09 April 2026 06:44:52 +0000 (0:00:02.441) 0:05:55.479 ******** 2026-04-09 06:44:56.762415 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:44:56.762427 | orchestrator | 2026-04-09 06:44:56.762449 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-09 06:44:56.762461 | orchestrator | Thursday 09 April 2026 06:44:53 +0000 (0:00:01.415) 0:05:56.894 ******** 2026-04-09 06:44:56.762476 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762491 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762502 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762537 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762576 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762620 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762633 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762644 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762656 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762675 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762687 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:56.762710 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:58.855550 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:58.855682 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:58.855715 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:44:58.855725 | orchestrator | 2026-04-09 06:44:58.855735 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 06:44:58.855745 | orchestrator | Thursday 09 April 2026 06:44:57 +0000 (0:00:03.872) 0:06:00.767 ******** 2026-04-09 06:44:58.855755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:44:58.855778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2026-04-09 06:44:58.855803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:44:58.855812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:44:58.855826 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:44:58.855836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:44:58.855845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:44:58.855857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:44:58.855866 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:44:58.855882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:45:01.008206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:45:01.008334 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:45:01.008354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:45:01.008368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:45:01.008380 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:45:01.008392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:45:01.008418 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:45:01.008431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:45:01.008442 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:45:01.008471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-04-09 06:45:01.008491 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:45:01.008503 | orchestrator | 2026-04-09 06:45:01.008515 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 06:45:01.008528 | orchestrator | Thursday 09 April 2026 06:44:59 +0000 (0:00:02.393) 0:06:03.160 ******** 2026-04-09 06:45:01.008540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:45:01.008553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:45:01.008564 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:45:01.008576 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:45:01.008662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:45:01.008684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:45:05.955497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:45:05.955674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}})  2026-04-09 06:45:05.955702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:45:05.955724 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:45:05.955763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:45:05.955783 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:45:05.955804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:45:05.955866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:45:05.955888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:45:05.955907 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:45:05.955927 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:45:05.955947 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:45:05.955966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:45:05.955993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 
06:45:05.956013 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:45:05.956033 | orchestrator | 2026-04-09 06:45:05.956056 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 06:45:05.956089 | orchestrator | Thursday 09 April 2026 06:45:02 +0000 (0:00:02.556) 0:06:05.717 ******** 2026-04-09 06:45:05.956110 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:45:05.956131 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:45:05.956152 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:45:05.956175 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 06:45:05.956196 | orchestrator | 2026-04-09 06:45:05.956217 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-09 06:45:05.956238 | orchestrator | Thursday 09 April 2026 06:45:03 +0000 (0:00:01.296) 0:06:07.013 ******** 2026-04-09 06:45:05.956259 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 06:45:05.956281 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 06:45:05.956302 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 06:45:05.956323 | orchestrator | 2026-04-09 06:45:05.956342 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-09 06:45:05.956363 | orchestrator | Thursday 09 April 2026 06:45:04 +0000 (0:00:01.264) 0:06:08.277 ******** 2026-04-09 06:45:05.956383 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 06:45:05.956403 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 06:45:05.956421 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 06:45:05.956440 | orchestrator | 2026-04-09 06:45:05.956459 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-09 06:45:05.956487 | orchestrator | Thursday 09 April 2026 06:45:05 
+0000 (0:00:01.013) 0:06:09.291 ******** 2026-04-09 06:45:31.623104 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:45:31.623219 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:45:31.623235 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:45:31.623247 | orchestrator | 2026-04-09 06:45:31.623260 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-09 06:45:31.623272 | orchestrator | Thursday 09 April 2026 06:45:06 +0000 (0:00:00.493) 0:06:09.784 ******** 2026-04-09 06:45:31.623283 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:45:31.623295 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:45:31.623306 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:45:31.623317 | orchestrator | 2026-04-09 06:45:31.623328 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-09 06:45:31.623339 | orchestrator | Thursday 09 April 2026 06:45:07 +0000 (0:00:00.803) 0:06:10.588 ******** 2026-04-09 06:45:31.623350 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-09 06:45:31.623362 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-09 06:45:31.623373 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-09 06:45:31.623384 | orchestrator | 2026-04-09 06:45:31.623396 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-09 06:45:31.623407 | orchestrator | Thursday 09 April 2026 06:45:08 +0000 (0:00:01.140) 0:06:11.728 ******** 2026-04-09 06:45:31.623418 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-09 06:45:31.623429 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-09 06:45:31.623440 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-09 06:45:31.623451 | orchestrator | 2026-04-09 06:45:31.623462 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-09 
06:45:31.623473 | orchestrator | Thursday 09 April 2026 06:45:09 +0000 (0:00:01.094) 0:06:12.823 ******** 2026-04-09 06:45:31.623484 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-09 06:45:31.623495 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-09 06:45:31.623506 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-09 06:45:31.623517 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-09 06:45:31.623528 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt) 2026-04-09 06:45:31.623539 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-09 06:45:31.623550 | orchestrator | 2026-04-09 06:45:31.623664 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-09 06:45:31.623703 | orchestrator | Thursday 09 April 2026 06:45:13 +0000 (0:00:03.626) 0:06:16.449 ******** 2026-04-09 06:45:31.623717 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:45:31.623731 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:45:31.623743 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:45:31.623757 | orchestrator | 2026-04-09 06:45:31.623770 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-09 06:45:31.623783 | orchestrator | Thursday 09 April 2026 06:45:13 +0000 (0:00:00.589) 0:06:17.038 ******** 2026-04-09 06:45:31.623795 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:45:31.623809 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:45:31.623821 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:45:31.623833 | orchestrator | 2026-04-09 06:45:31.623846 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-09 06:45:31.623859 | orchestrator | Thursday 09 April 2026 06:45:14 +0000 (0:00:00.375) 0:06:17.414 ******** 2026-04-09 06:45:31.623872 | orchestrator | ok: [testbed-node-3] 2026-04-09 
06:45:31.623884 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:45:31.623896 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:45:31.623907 | orchestrator | 2026-04-09 06:45:31.623918 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-09 06:45:31.623929 | orchestrator | Thursday 09 April 2026 06:45:15 +0000 (0:00:01.399) 0:06:18.814 ******** 2026-04-09 06:45:31.623956 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-09 06:45:31.623969 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-09 06:45:31.623980 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-09 06:45:31.624040 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-09 06:45:31.624053 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-09 06:45:31.624064 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-09 06:45:31.624075 | orchestrator | 2026-04-09 06:45:31.624086 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] 
***************************** 2026-04-09 06:45:31.624097 | orchestrator | Thursday 09 April 2026 06:45:18 +0000 (0:00:03.485) 0:06:22.299 ******** 2026-04-09 06:45:31.624109 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-09 06:45:31.624120 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-09 06:45:31.624131 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-09 06:45:31.624142 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-09 06:45:31.624172 | orchestrator | ok: [testbed-node-3] 2026-04-09 06:45:31.624184 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-09 06:45:31.624195 | orchestrator | ok: [testbed-node-4] 2026-04-09 06:45:31.624206 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-09 06:45:31.624217 | orchestrator | ok: [testbed-node-5] 2026-04-09 06:45:31.624228 | orchestrator | 2026-04-09 06:45:31.624239 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-09 06:45:31.624250 | orchestrator | Thursday 09 April 2026 06:45:23 +0000 (0:00:04.352) 0:06:26.652 ******** 2026-04-09 06:45:31.624261 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:45:31.624282 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:45:31.624293 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:45:31.624305 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-04-09 06:45:31.624316 | orchestrator | 2026-04-09 06:45:31.624327 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-09 06:45:31.624338 | orchestrator | Thursday 09 April 2026 06:45:25 +0000 (0:00:02.598) 0:06:29.251 ******** 2026-04-09 06:45:31.624349 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 06:45:31.624360 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 06:45:31.624371 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-04-09 06:45:31.624382 | orchestrator | 2026-04-09 06:45:31.624394 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-09 06:45:31.624405 | orchestrator | Thursday 09 April 2026 06:45:26 +0000 (0:00:01.015) 0:06:30.266 ******** 2026-04-09 06:45:31.624416 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:45:31.624427 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:45:31.624437 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:45:31.624448 | orchestrator | 2026-04-09 06:45:31.624459 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-09 06:45:31.624470 | orchestrator | Thursday 09 April 2026 06:45:27 +0000 (0:00:00.365) 0:06:30.631 ******** 2026-04-09 06:45:31.624481 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:45:31.624492 | orchestrator | 2026-04-09 06:45:31.624503 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-09 06:45:31.624514 | orchestrator | Thursday 09 April 2026 06:45:27 +0000 (0:00:00.148) 0:06:30.780 ******** 2026-04-09 06:45:31.624525 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:45:31.624536 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:45:31.624547 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:45:31.624577 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:45:31.624589 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:45:31.624600 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:45:31.624611 | orchestrator | 2026-04-09 06:45:31.624622 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-09 06:45:31.624633 | orchestrator | Thursday 09 April 2026 06:45:28 +0000 (0:00:00.830) 0:06:31.610 ******** 2026-04-09 06:45:31.624644 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 06:45:31.624655 | orchestrator | 2026-04-09 
06:45:31.624666 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-09 06:45:31.624677 | orchestrator | Thursday 09 April 2026 06:45:29 +0000 (0:00:00.802) 0:06:32.412 ******** 2026-04-09 06:45:31.624688 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:45:31.624699 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:45:31.624710 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:45:31.624721 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:45:31.624732 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:45:31.624743 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:45:31.624753 | orchestrator | 2026-04-09 06:45:31.624764 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-09 06:45:31.624775 | orchestrator | Thursday 09 April 2026 06:45:29 +0000 (0:00:00.624) 0:06:33.037 ******** 2026-04-09 06:45:31.624795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:45:31.624826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544494 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:33.544525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:38.176861 | orchestrator | 2026-04-09 06:45:38.176988 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-09 06:45:38.177013 | orchestrator | Thursday 09 April 2026 06:45:33 +0000 (0:00:03.947) 0:06:36.984 ******** 2026-04-09 06:45:38.177036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:45:38.177059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:45:38.177099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:45:38.177145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:45:38.177168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:45:38.177212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:45:38.177234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:38.177264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:38.177300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:45:38.177322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:45:38.177353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:55.444408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:45:55.444529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:55.444729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:55.444860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:45:55.444883 | orchestrator | 2026-04-09 06:45:55.444905 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-09 06:45:55.444927 | orchestrator | Thursday 09 April 2026 06:45:40 +0000 
(0:00:07.046) 0:06:44.031 ********
2026-04-09 06:45:55.444947 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:45:55.444969 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:45:55.444989 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:45:55.445049 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:45:55.445063 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:45:55.445077 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:45:55.445091 | orchestrator |
2026-04-09 06:45:55.445104 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-09 06:45:55.445116 | orchestrator | Thursday 09 April 2026 06:45:42 +0000 (0:00:01.439) 0:06:45.470 ********
2026-04-09 06:45:55.445129 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:45:55.445150 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:45:55.445171 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:45:55.445187 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:45:55.445198 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:45:55.445209 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:45:55.445251 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:45:55.445264 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:45:55.445275 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:45:55.445286 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:45:55.445297 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:45:55.445310 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:45:55.445354 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:45:55.445374 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:45:55.445394 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:45:55.445412 | orchestrator |
2026-04-09 06:45:55.445426 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-09 06:45:55.445437 | orchestrator | Thursday 09 April 2026 06:45:46 +0000 (0:00:03.964) 0:06:49.435 ********
2026-04-09 06:45:55.445461 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:45:55.445472 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:45:55.445483 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:45:55.445495 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:45:55.445506 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:45:55.445517 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:45:55.445528 | orchestrator |
2026-04-09 06:45:55.445567 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-09 06:45:55.445578 | orchestrator | Thursday 09 April 2026 06:45:46 +0000 (0:00:00.643) 0:06:50.078 ********
2026-04-09 06:45:55.445592 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:45:55.445610 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:45:55.445629 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:45:55.445648 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:45:55.445666 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:45:55.445685 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:45:55.445697 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445717 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445728 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445739 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445749 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:45:55.445760 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445771 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:45:55.445782 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445868 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:45:55.445880 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445891 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445902 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445913 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445932 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445949 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:45:55.445968 | orchestrator |
2026-04-09 06:45:55.446094 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-09 06:45:55.446115 | orchestrator | Thursday 09 April 2026 06:45:52 +0000 (0:00:05.414) 0:06:55.493 ********
2026-04-09 06:45:55.446136 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:45:55.446155 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:45:55.446174 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:45:55.446208 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:45:55.446222 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:45:55.446233 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:45:55.446244 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:45:55.446255 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:45:55.446266 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:45:55.446277 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:45:55.446299 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:46:06.109893 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:46:06.110953 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:46:06.111019 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:46:06.111040 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:46:06.111053 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:46:06.111066 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:46:06.111085 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:46:06.111104 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:46:06.111121 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:46:06.111139 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:46:06.111158 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:46:06.111174 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:46:06.111192 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:46:06.111210 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:46:06.111227 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:46:06.111245 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:46:06.111261 | orchestrator |
2026-04-09 06:46:06.111279 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-09 06:46:06.111296 | orchestrator | Thursday 09 April 2026 06:45:59 +0000 (0:00:07.126) 0:07:02.619 ********
2026-04-09 06:46:06.111313 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:46:06.111332 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:46:06.111350 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:46:06.111391 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:46:06.111411 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:46:06.111428 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:46:06.111446 | orchestrator |
2026-04-09 06:46:06.111465 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-09 06:46:06.111503 | orchestrator | Thursday 09 April 2026 06:46:00 +0000 (0:00:00.841) 0:07:03.461 ********
2026-04-09 06:46:06.111563 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:46:06.111583 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:46:06.111595 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:46:06.111606 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:46:06.111617 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:46:06.111628 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:46:06.111639 | orchestrator |
2026-04-09 06:46:06.111651 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-09 06:46:06.111683 | orchestrator | Thursday 09 April 2026 06:46:00 +0000 (0:00:00.654) 0:07:04.115 ********
2026-04-09 06:46:06.111695 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:46:06.111706 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:46:06.111723 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:46:06.111741 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:46:06.111769 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:46:06.111790 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:46:06.111808 | orchestrator |
2026-04-09 06:46:06.111825 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-09 06:46:06.111843 | orchestrator | Thursday 09 April 2026 06:46:02 +0000 (0:00:02.133) 0:07:06.249 ********
2026-04-09 06:46:06.111860 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:46:06.111876 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:46:06.111893 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:46:06.111911 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:46:06.111929 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:46:06.111948 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:46:06.111967 | orchestrator |
2026-04-09 06:46:06.111985 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-09 06:46:06.112001 | orchestrator | Thursday 09 April 2026 06:46:04 +0000 (0:00:01.889) 0:07:08.139 ********
2026-04-09 06:46:06.112016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:46:06.112059 | orchestrator | skipping: [testbed-node-3] =>
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:46:06.112074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:46:06.112086 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:46:06.112107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:46:06.112131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:46:06.112144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:46:06.112155 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:46:06.112176 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:46:08.944355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:46:08.944480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:46:08.944567 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:46:08.944592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:46:08.944605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:46:08.944617 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:46:08.944629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:46:08.944641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:46:08.944653 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:46:08.944682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:46:08.944708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:46:08.944721 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:46:08.944733 | orchestrator | 2026-04-09 06:46:08.944745 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-09 06:46:08.944758 | orchestrator | Thursday 09 April 2026 06:46:06 +0000 (0:00:01.660) 0:07:09.799 ******** 2026-04-09 06:46:08.944769 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-09 06:46:08.944781 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-09 06:46:08.944791 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:46:08.944803 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-09 06:46:08.944813 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-09 06:46:08.944824 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:46:08.944835 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-09 06:46:08.944846 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-09 06:46:08.944857 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:46:08.944868 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-09 06:46:08.944879 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-09 06:46:08.944890 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 06:46:08.944901 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-09 06:46:08.944911 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-09 06:46:08.944922 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:46:08.944934 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-09 06:46:08.944944 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-09 06:46:08.944955 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:46:08.944966 | orchestrator | 2026-04-09 06:46:08.944978 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-04-09 06:46:08.944989 | orchestrator | Thursday 09 April 2026 06:46:07 +0000 (0:00:00.742) 0:07:10.542 ******** 2026-04-09 06:46:08.945001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:46:08.945022 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256355 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256411 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:46:10.256449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:46:12.275759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:46:12.275863 | orchestrator | 2026-04-09 06:46:12.275880 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-09 06:46:12.275894 | orchestrator | Thursday 09 April 2026 06:46:10 +0000 (0:00:03.184) 0:07:13.727 ******** 2026-04-09 06:46:12.275905 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 06:46:12.275933 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:46:12.275944 | orchestrator | } 2026-04-09 06:46:12.275955 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 06:46:12.275965 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:46:12.275975 | orchestrator | } 2026-04-09 06:46:12.275985 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 06:46:12.275995 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:46:12.276005 | orchestrator | } 2026-04-09 06:46:12.276016 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 06:46:12.276026 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:46:12.276036 | orchestrator | } 2026-04-09 06:46:12.276046 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 06:46:12.276056 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:46:12.276066 | orchestrator | } 2026-04-09 06:46:12.276076 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 06:46:12.276086 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:46:12.276096 | orchestrator | } 2026-04-09 06:46:12.276106 | orchestrator | 2026-04-09 06:46:12.276116 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 06:46:12.276126 | orchestrator | Thursday 09 April 2026 06:46:11 +0000 (0:00:00.904) 0:07:14.632 ******** 2026-04-09 06:46:12.276137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:46:12.276168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:46:12.276180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:46:12.276192 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:46:12.276219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:46:12.276236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:46:12.276247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:46:12.276259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:46:12.276277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:46:12.276295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:48:51.497120 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:48:51.497336 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:48:51.497383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  
2026-04-09 06:48:51.497401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:48:51.497414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:48:51.497491 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:48:51.497515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:48:51.497534 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:48:51.497553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:48:51.497574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:48:51.497596 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:48:51.497615 | orchestrator | 2026-04-09 06:48:51.497646 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 06:48:51.497694 | orchestrator | Thursday 09 April 2026 06:46:13 +0000 (0:00:02.122) 0:07:16.754 ******** 2026-04-09 06:48:51.497715 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:48:51.497736 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:48:51.497755 | 
orchestrator | skipping: [testbed-node-5]
2026-04-09 06:48:51.497782 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:48:51.497803 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:48:51.497821 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:48:51.497840 | orchestrator |
2026-04-09 06:48:51.497856 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 06:48:51.497874 | orchestrator | Thursday 09 April 2026 06:46:14 +0000 (0:00:00.653) 0:07:17.408 ********
2026-04-09 06:48:51.497892 | orchestrator |
2026-04-09 06:48:51.497911 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 06:48:51.497934 | orchestrator | Thursday 09 April 2026 06:46:14 +0000 (0:00:00.152) 0:07:17.561 ********
2026-04-09 06:48:51.497945 | orchestrator |
2026-04-09 06:48:51.497956 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 06:48:51.497967 | orchestrator | Thursday 09 April 2026 06:46:14 +0000 (0:00:00.337) 0:07:17.898 ********
2026-04-09 06:48:51.497978 | orchestrator |
2026-04-09 06:48:51.497989 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 06:48:51.497999 | orchestrator | Thursday 09 April 2026 06:46:14 +0000 (0:00:00.149) 0:07:18.048 ********
2026-04-09 06:48:51.498010 | orchestrator |
2026-04-09 06:48:51.498100 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 06:48:51.498122 | orchestrator | Thursday 09 April 2026 06:46:14 +0000 (0:00:00.146) 0:07:18.195 ********
2026-04-09 06:48:51.498133 | orchestrator |
2026-04-09 06:48:51.498144 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-09 06:48:51.498155 | orchestrator | Thursday 09 April 2026 06:46:15 +0000 (0:00:00.147) 0:07:18.342 ********
2026-04-09 06:48:51.498166 | orchestrator |
2026-04-09 06:48:51.498177 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-04-09 06:48:51.498187 | orchestrator | Thursday 09 April 2026 06:46:15 +0000 (0:00:00.148) 0:07:18.491 ********
2026-04-09 06:48:51.498198 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:48:51.498209 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:48:51.498220 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:48:51.498231 | orchestrator |
2026-04-09 06:48:51.498242 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-04-09 06:48:51.498253 | orchestrator | Thursday 09 April 2026 06:46:29 +0000 (0:00:13.894) 0:07:32.385 ********
2026-04-09 06:48:51.498264 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:48:51.498275 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:48:51.498286 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:48:51.498296 | orchestrator |
2026-04-09 06:48:51.498307 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-04-09 06:48:51.498318 | orchestrator | Thursday 09 April 2026 06:46:50 +0000 (0:00:21.138) 0:07:53.524 ********
2026-04-09 06:48:51.498329 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:48:51.498340 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:48:51.498351 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:48:51.498362 | orchestrator |
2026-04-09 06:48:51.498372 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-04-09 06:48:51.498383 | orchestrator | Thursday 09 April 2026 06:47:15 +0000 (0:00:25.611) 0:08:19.135 ********
2026-04-09 06:48:51.498394 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:48:51.498405 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:48:51.498416 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:48:51.498427 | orchestrator |
2026-04-09 06:48:51.498455 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-04-09 06:48:51.498466 | orchestrator | Thursday 09 April 2026 06:47:58 +0000 (0:00:43.113) 0:09:02.249 ********
2026-04-09 06:48:51.498477 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:48:51.498489 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2026-04-09 06:48:51.498501 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2026-04-09 06:48:51.498512 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:48:51.498523 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:48:51.498534 | orchestrator |
2026-04-09 06:48:51.498544 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-04-09 06:48:51.498555 | orchestrator | Thursday 09 April 2026 06:48:05 +0000 (0:00:06.495) 0:09:08.744 ********
2026-04-09 06:48:51.498566 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:48:51.498577 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:48:51.498588 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:48:51.498599 | orchestrator |
2026-04-09 06:48:51.498610 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-04-09 06:48:51.498621 | orchestrator | Thursday 09 April 2026 06:48:06 +0000 (0:00:00.807) 0:09:09.552 ********
2026-04-09 06:48:51.498632 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:48:51.498643 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:48:51.498653 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:48:51.498664 | orchestrator |
2026-04-09 06:48:51.498675 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-04-09 06:48:51.498687 | orchestrator | Thursday 09 April 2026 06:48:41 +0000 (0:00:35.237) 0:09:44.789 ********
2026-04-09 06:48:51.498705 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:48:51.498716 | orchestrator |
2026-04-09 06:48:51.498727 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-04-09 06:48:51.498738 | orchestrator | Thursday 09 April 2026 06:48:41 +0000 (0:00:00.507) 0:09:45.297 ********
2026-04-09 06:48:51.498748 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:48:51.498759 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:48:51.498770 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:48:51.498781 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:48:51.498791 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:48:51.498803 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:48:51.498814 | orchestrator |
2026-04-09 06:48:51.498825 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-04-09 06:48:51.498845 | orchestrator | Thursday 09 April 2026 06:48:51 +0000 (0:00:09.528) 0:09:54.826 ********
2026-04-09 06:49:44.318232 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:49:44.318359 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:49:44.318376 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:49:44.318388 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:49:44.318400 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:49:44.318461 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:49:44.318475 | orchestrator |
2026-04-09 06:49:44.318488 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-04-09 06:49:44.318501 | orchestrator | Thursday 09 April 2026 06:49:01 +0000 (0:00:10.333) 0:10:05.159 ********
2026-04-09 06:49:44.318512 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:49:44.318523 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:49:44.318551 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:49:44.318563 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:49:44.318574 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:49:44.318585 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2026-04-09 06:49:44.318597 | orchestrator |
2026-04-09 06:49:44.318608 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-09 06:49:44.318619 | orchestrator | Thursday 09 April 2026 06:49:05 +0000 (0:00:03.817) 0:10:08.976 ********
2026-04-09 06:49:44.318631 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:49:44.318642 | orchestrator |
2026-04-09 06:49:44.318653 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-09 06:49:44.318663 | orchestrator | Thursday 09 April 2026 06:49:19 +0000 (0:00:13.412) 0:10:22.389 ********
2026-04-09 06:49:44.318674 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:49:44.318685 | orchestrator |
2026-04-09 06:49:44.318696 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-04-09 06:49:44.318707 | orchestrator | Thursday 09 April 2026 06:49:20 +0000 (0:00:01.910) 0:10:24.300 ********
2026-04-09 06:49:44.318718 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:49:44.318729 | orchestrator |
2026-04-09 06:49:44.318740 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-04-09 06:49:44.318751 | orchestrator | Thursday 09 April 2026 06:49:22 +0000 (0:00:01.570) 0:10:25.870 ********
2026-04-09 06:49:44.318764 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 06:49:44.318777 | orchestrator |
2026-04-09 06:49:44.318790 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-09
06:49:44.318802 | orchestrator |
2026-04-09 06:49:44.318814 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-09 06:49:44.318827 | orchestrator | Thursday 09 April 2026 06:49:35 +0000 (0:00:13.308) 0:10:39.179 ********
2026-04-09 06:49:44.318840 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:49:44.318853 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:49:44.318865 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:49:44.318878 | orchestrator |
2026-04-09 06:49:44.318912 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-09 06:49:44.318926 | orchestrator |
2026-04-09 06:49:44.318938 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-09 06:49:44.318951 | orchestrator | Thursday 09 April 2026 06:49:37 +0000 (0:00:01.309) 0:10:40.488 ********
2026-04-09 06:49:44.318963 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:49:44.318976 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:49:44.318988 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:49:44.319001 | orchestrator |
2026-04-09 06:49:44.319014 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-09 06:49:44.319026 | orchestrator |
2026-04-09 06:49:44.319039 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-09 06:49:44.319052 | orchestrator | Thursday 09 April 2026 06:49:38 +0000 (0:00:01.200) 0:10:41.689 ********
2026-04-09 06:49:44.319064 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-09 06:49:44.319078 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-09 06:49:44.319091 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-09 06:49:44.319104 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-09 06:49:44.319116 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-09 06:49:44.319127 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-09 06:49:44.319137 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:49:44.319149 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-09 06:49:44.319159 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-09 06:49:44.319170 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-09 06:49:44.319181 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-09 06:49:44.319192 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-09 06:49:44.319203 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-09 06:49:44.319214 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:49:44.319225 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-09 06:49:44.319236 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-09 06:49:44.319247 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-09 06:49:44.319257 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-09 06:49:44.319268 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-09 06:49:44.319279 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-09 06:49:44.319290 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:49:44.319301 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-09 06:49:44.319312 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-09 06:49:44.319323 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-09 06:49:44.319334 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-09 06:49:44.319361 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-09 06:49:44.319373 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-09 06:49:44.319384 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:49:44.319396 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-09 06:49:44.319407 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-09 06:49:44.319439 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-09 06:49:44.319450 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-09 06:49:44.319461 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-09 06:49:44.319477 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-09 06:49:44.319489 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:49:44.319508 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-09 06:49:44.319519 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-09 06:49:44.319530 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-09 06:49:44.319541 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-09 06:49:44.319552 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-09 06:49:44.319563 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-09 06:49:44.319574 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:49:44.319585 | orchestrator |
2026-04-09 06:49:44.319596 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-09 06:49:44.319607 | orchestrator |
2026-04-09 06:49:44.319618 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-09 06:49:44.319629 | orchestrator | Thursday 09 April 2026 06:49:40 +0000 (0:00:01.859) 0:10:43.548 ********
2026-04-09 06:49:44.319640 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-09 06:49:44.319651 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-09 06:49:44.319662 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:49:44.319673 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-09 06:49:44.319684 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-09 06:49:44.319695 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:49:44.319706 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-09 06:49:44.319717 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-09 06:49:44.319728 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:49:44.319739 | orchestrator |
2026-04-09 06:49:44.319750 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-09 06:49:44.319761 | orchestrator |
2026-04-09 06:49:44.319772 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-09 06:49:44.319783 | orchestrator | Thursday 09 April 2026 06:49:41 +0000 (0:00:01.151) 0:10:44.700 ********
2026-04-09 06:49:44.319793 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:49:44.319805 | orchestrator |
2026-04-09 06:49:44.319816 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-09 06:49:44.319827 | orchestrator |
2026-04-09 06:49:44.319837 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-09 06:49:44.319848 | orchestrator | Thursday 09 April 2026 06:49:42 +0000 (0:00:01.245) 0:10:45.945 ********
2026-04-09 06:49:44.319860 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:49:44.319871 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:49:44.319882 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:49:44.319893 | orchestrator |
2026-04-09 06:49:44.319904 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 06:49:44.319916 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 06:49:44.319930 | orchestrator | testbed-node-0 : ok=58  changed=25  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-09 06:49:44.319941 | orchestrator | testbed-node-1 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0
2026-04-09 06:49:44.319954 | orchestrator | testbed-node-2 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0
2026-04-09 06:49:44.319971 | orchestrator | testbed-node-3 : ok=49  changed=15  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-09 06:49:44.319989 | orchestrator | testbed-node-4 : ok=43  changed=14  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-09 06:49:44.320016 | orchestrator | testbed-node-5 : ok=48  changed=14  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-09 06:49:44.320035 | orchestrator |
2026-04-09 06:49:44.320053 | orchestrator |
2026-04-09 06:49:44.320071 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 06:49:44.320089 | orchestrator | Thursday 09 April 2026 06:49:44 +0000 (0:00:01.692) 0:10:47.638 ********
2026-04-09 06:49:44.320100 | orchestrator | ===============================================================================
2026-04-09 06:49:44.320111 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 43.11s
2026-04-09 06:49:44.320121 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 36.44s
2026-04-09 06:49:44.320132 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 35.24s
2026-04-09
06:49:44.320152 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 33.19s
2026-04-09 06:49:44.790496 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 26.47s
2026-04-09 06:49:44.790593 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.61s
2026-04-09 06:49:44.790608 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 21.14s
2026-04-09 06:49:44.790619 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.59s
2026-04-09 06:49:44.790630 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.20s
2026-04-09 06:49:44.790661 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.89s
2026-04-09 06:49:44.790673 | orchestrator | nova-cell : Update cell ------------------------------------------------ 13.49s
2026-04-09 06:49:44.790684 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.41s
2026-04-09 06:49:44.790695 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.31s
2026-04-09 06:49:44.790706 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 12.73s
2026-04-09 06:49:44.790717 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.60s
2026-04-09 06:49:44.790728 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 12.46s
2026-04-09 06:49:44.790738 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.47s
2026-04-09 06:49:44.790749 | orchestrator | nova : Restart nova-metadata container --------------------------------- 10.54s
2026-04-09 06:49:44.790760 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.33s
2026-04-09 06:49:44.790771 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves ---- 9.53s
2026-04-09 06:49:44.969958 | orchestrator | + osism apply nova-update-cell-mappings
2026-04-09 06:49:56.470262 | orchestrator | 2026-04-09 06:49:56 | INFO  | Prepare task for execution of nova-update-cell-mappings.
2026-04-09 06:49:56.547574 | orchestrator | 2026-04-09 06:49:56 | INFO  | Task b68201cd-8834-413d-bbfe-faedeabdaf63 (nova-update-cell-mappings) was prepared for execution.
2026-04-09 06:49:56.547663 | orchestrator | 2026-04-09 06:49:56 | INFO  | It takes a moment until task b68201cd-8834-413d-bbfe-faedeabdaf63 (nova-update-cell-mappings) has been started and output is visible here.
2026-04-09 06:50:28.708984 | orchestrator |
2026-04-09 06:50:28.709140 | orchestrator | PLAY [Update Nova cell mappings] ***********************************************
2026-04-09 06:50:28.709169 | orchestrator |
2026-04-09 06:50:28.709190 | orchestrator | TASK [Get list of Nova cells] **************************************************
2026-04-09 06:50:28.709210 | orchestrator | Thursday 09 April 2026 06:50:01 +0000 (0:00:01.485) 0:00:01.485 ********
2026-04-09 06:50:28.709227 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:50:28.709246 | orchestrator |
2026-04-09 06:50:28.709264 | orchestrator | TASK [Parse cell information] **************************************************
2026-04-09 06:50:28.709282 | orchestrator | Thursday 09 April 2026 06:50:16 +0000 (0:00:15.094) 0:00:16.579 ********
2026-04-09 06:50:28.709337 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:50:28.709358 | orchestrator |
2026-04-09 06:50:28.709375 | orchestrator | TASK [Display cells to update] *************************************************
2026-04-09 06:50:28.709427 | orchestrator | Thursday 09 April 2026 06:50:17 +0000 (0:00:01.140) 0:00:17.720 ********
2026-04-09 06:50:28.709448 | orchestrator | ok: [testbed-node-0] => {
2026-04-09 06:50:28.709468 | orchestrator |  "msg": "Cells to update: [{'name': '', 'uuid': '613486d6-ea5a-41fb-a0c8-707706528bba'}]"
2026-04-09 06:50:28.709487 | orchestrator | }
2026-04-09 06:50:28.709504 | orchestrator |
2026-04-09 06:50:28.709522 | orchestrator | TASK [Use specified cell UUID if provided] *************************************
2026-04-09 06:50:28.709539 | orchestrator | Thursday 09 April 2026 06:50:18 +0000 (0:00:01.085) 0:00:18.805 ********
2026-04-09 06:50:28.709555 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:50:28.709573 | orchestrator |
2026-04-09 06:50:28.709590 | orchestrator | TASK [Abort if multiple cells found without specific UUID and abort_on_multiple is enabled] ***
2026-04-09 06:50:28.709609 | orchestrator | Thursday 09 April 2026 06:50:20 +0000 (0:00:01.137) 0:00:19.943 ********
2026-04-09 06:50:28.709628 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:50:28.709646 | orchestrator |
2026-04-09 06:50:28.709663 | orchestrator | TASK [Update Nova cell mappings] ***********************************************
2026-04-09 06:50:28.709680 | orchestrator | Thursday 09 April 2026 06:50:21 +0000 (0:00:01.117) 0:00:21.061 ********
2026-04-09 06:50:28.709697 | orchestrator | changed: [testbed-node-0] => (item=613486d6-ea5a-41fb-a0c8-707706528bba)
2026-04-09 06:50:28.709716 | orchestrator |
2026-04-09 06:50:28.709734 | orchestrator | TASK [Display update results] **************************************************
2026-04-09 06:50:28.709751 | orchestrator | Thursday 09 April 2026 06:50:26 +0000 (0:00:05.698) 0:00:26.759 ********
2026-04-09 06:50:28.709770 | orchestrator | ok: [testbed-node-0] => (item=613486d6-ea5a-41fb-a0c8-707706528bba) => {
2026-04-09 06:50:28.709788 | orchestrator |  "msg": "Cell 613486d6-ea5a-41fb-a0c8-707706528bba updated successfully"
2026-04-09 06:50:28.709806 | orchestrator | }
2026-04-09 06:50:28.709825 | orchestrator |
2026-04-09 06:50:28.709843 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 06:50:28.709862 | orchestrator | testbed-node-0 : ok=5  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 06:50:28.709882 | orchestrator |
2026-04-09 06:50:28.709900 | orchestrator |
2026-04-09 06:50:28.709916 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 06:50:28.709934 | orchestrator | Thursday 09 April 2026 06:50:28 +0000 (0:00:01.554) 0:00:28.314 ********
2026-04-09 06:50:28.709951 | orchestrator | ===============================================================================
2026-04-09 06:50:28.709969 | orchestrator | Get list of Nova cells ------------------------------------------------- 15.09s
2026-04-09 06:50:28.709987 | orchestrator | Update Nova cell mappings ----------------------------------------------- 5.70s
2026-04-09 06:50:28.710005 | orchestrator | Display update results -------------------------------------------------- 1.55s
2026-04-09 06:50:28.710107 | orchestrator | Parse cell information -------------------------------------------------- 1.14s
2026-04-09 06:50:28.710125 | orchestrator | Use specified cell UUID if provided ------------------------------------- 1.14s
2026-04-09 06:50:28.710165 | orchestrator | Abort if multiple cells found without specific UUID and abort_on_multiple is enabled --- 1.12s
2026-04-09 06:50:28.710186 | orchestrator | Display cells to update ------------------------------------------------- 1.09s
2026-04-09 06:50:28.885166 | orchestrator | + osism apply -a upgrade nova
2026-04-09 06:50:30.282508 | orchestrator | 2026-04-09 06:50:30 | INFO  | Prepare task for execution of nova.
2026-04-09 06:50:30.372283 | orchestrator | 2026-04-09 06:50:30 | INFO  | Task 9a26f0e7-c904-4200-8010-dda37d338b7e (nova) was prepared for execution.
2026-04-09 06:50:30.372349 | orchestrator | 2026-04-09 06:50:30 | INFO  | It takes a moment until task 9a26f0e7-c904-4200-8010-dda37d338b7e (nova) has been started and output is visible here.
2026-04-09 06:51:45.944661 | orchestrator |
2026-04-09 06:51:45.944778 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 06:51:45.944795 | orchestrator |
2026-04-09 06:51:45.944808 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-09 06:51:45.944820 | orchestrator | Thursday 09 April 2026 06:50:36 +0000 (0:00:02.308) 0:00:02.308 ********
2026-04-09 06:51:45.944831 | orchestrator | changed: [testbed-manager]
2026-04-09 06:51:45.944844 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:51:45.944855 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:51:45.944866 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:51:45.944877 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:51:45.944888 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:51:45.944899 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:51:45.944910 | orchestrator |
2026-04-09 06:51:45.944921 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 06:51:45.944932 | orchestrator | Thursday 09 April 2026 06:50:39 +0000 (0:00:03.374) 0:00:05.682 ********
2026-04-09 06:51:45.944943 | orchestrator | changed: [testbed-manager]
2026-04-09 06:51:45.944954 | orchestrator | changed: [testbed-node-0]
2026-04-09 06:51:45.944965 | orchestrator | changed: [testbed-node-1]
2026-04-09 06:51:45.944976 | orchestrator | changed: [testbed-node-2]
2026-04-09 06:51:45.944987 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:51:45.944997 | orchestrator | changed: [testbed-node-4]
2026-04-09 06:51:45.945008 | orchestrator | changed: [testbed-node-5]
2026-04-09 06:51:45.945020 | orchestrator |
2026-04-09 06:51:45.945031 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 06:51:45.945042 | orchestrator | Thursday 09 April 2026 06:50:41 +0000 (0:00:02.078) 0:00:07.761 ********
2026-04-09 06:51:45.945053 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-09 06:51:45.945064 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-09 06:51:45.945075 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-09 06:51:45.945086 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-09 06:51:45.945097 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-09 06:51:45.945108 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-09 06:51:45.945119 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-09 06:51:45.945129 | orchestrator |
2026-04-09 06:51:45.945140 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-09 06:51:45.945151 | orchestrator |
2026-04-09 06:51:45.945163 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-09 06:51:45.945174 | orchestrator | Thursday 09 April 2026 06:50:44 +0000 (0:00:02.862) 0:00:10.624 ********
2026-04-09 06:51:45.945184 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:51:45.945195 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:51:45.945207 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:51:45.945218 | orchestrator |
2026-04-09 06:51:45.945229 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-09 06:51:45.945240 | orchestrator | Thursday 09 April 2026 06:50:47 +0000 (0:00:03.117) 0:00:13.741 ********
2026-04-09 06:51:45.945251 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 06:51:45.945262 | orchestrator |
2026-04-09 06:51:45.945273 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-09 06:51:45.945283 | orchestrator | Thursday 09 April 2026 06:50:50 +0000 (0:00:02.464) 0:00:16.206 ********
2026-04-09 06:51:45.945294 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:51:45.945306 | orchestrator |
2026-04-09 06:51:45.945317 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-09 06:51:45.945328 | orchestrator | Thursday 09 April 2026 06:50:52 +0000 (0:00:02.041) 0:00:18.248 ********
2026-04-09 06:51:45.945339 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:51:45.945413 | orchestrator |
2026-04-09 06:51:45.945427 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-09 06:51:45.945438 | orchestrator | Thursday 09 April 2026 06:50:54 +0000 (0:00:02.164) 0:00:20.412 ********
2026-04-09 06:51:45.945449 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:51:45.945460 | orchestrator |
2026-04-09 06:51:45.945471 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-09 06:51:45.945482 | orchestrator | Thursday 09 April 2026 06:50:58 +0000 (0:00:04.024) 0:00:24.437 ********
2026-04-09 06:51:45.945493 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:51:45.945503 | orchestrator |
2026-04-09 06:51:45.945514 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-09 06:51:45.945525 | orchestrator |
2026-04-09 06:51:45.945536 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-09 06:51:45.945547 | orchestrator | Thursday 09 April 2026 06:51:18 +0000 (0:00:19.958) 0:00:44.395 ********
2026-04-09 06:51:45.945558 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:51:45.945569 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:51:45.945579 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:51:45.945590 | orchestrator |
2026-04-09 06:51:45.945601 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-09 06:51:45.945612 | orchestrator | Thursday 09 April 2026 06:51:20 +0000 (0:00:01.639) 0:00:46.034 ********
2026-04-09 06:51:45.945623 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 06:51:45.945633 | orchestrator |
2026-04-09 06:51:45.945659 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-09 06:51:45.945670 | orchestrator | Thursday 09 April 2026 06:51:21 +0000 (0:00:01.947) 0:00:47.982 ********
2026-04-09 06:51:45.945681 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:51:45.945692 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:51:45.945703 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:51:45.945713 | orchestrator |
2026-04-09 06:51:45.945724 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-09 06:51:45.945735 | orchestrator | Thursday 09 April 2026 06:51:23 +0000 (0:00:01.695) 0:00:49.677 ********
2026-04-09 06:51:45.945746 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:51:45.945757 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:51:45.945768 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:51:45.945779 | orchestrator |
2026-04-09 06:51:45.945807 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-09 06:51:45.945819 | orchestrator | Thursday 09 April 2026 06:51:25 +0000 (0:00:02.021) 0:00:51.699 ********
2026-04-09 06:51:45.945830 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:51:45.945841 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:51:45.945852 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:51:45.945863 | orchestrator |
2026-04-09 06:51:45.945874 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-09 06:51:45.945885 | orchestrator | Thursday 09 April 2026 06:51:29 +0000 (0:00:03.638) 0:00:55.337 ********
2026-04-09 06:51:45.945896 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:51:45.945907 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:51:45.945918 | orchestrator | ok: [testbed-node-0]
2026-04-09 06:51:45.945929 | orchestrator |
2026-04-09 06:51:45.945940 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-09 06:51:45.945951 | orchestrator |
2026-04-09 06:51:45.945962 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-09 06:51:45.945973 | orchestrator | Thursday 09 April 2026 06:51:42 +0000 (0:00:13.386) 0:01:08.723 ********
2026-04-09 06:51:45.945983 | orchestrator | included: /ansible/roles/nova/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 06:51:45.945996 | orchestrator |
2026-04-09 06:51:45.946007 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-09 06:51:45.946081 | orchestrator | Thursday 09 April 2026 06:51:44 +0000 (0:00:01.944) 0:01:10.667 ********
2026-04-09 06:51:45.946111 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:51:45.946129 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 06:51:45.946157 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi':
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:51:57.663638 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:51:57.663760 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:51:57.663776 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:51:57.663800 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:51:57.663826 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:51:57.663837 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:51:57.663854 | orchestrator | 2026-04-09 06:51:57.663865 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-09 06:51:57.663875 | orchestrator | Thursday 09 April 2026 06:51:47 +0000 (0:00:03.192) 0:01:13.860 ******** 2026-04-09 06:51:57.663885 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 06:51:57.663895 | orchestrator | 2026-04-09 06:51:57.663905 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-09 06:51:57.663914 | orchestrator | Thursday 09 April 2026 06:51:49 +0000 (0:00:01.133) 0:01:14.993 ******** 2026-04-09 06:51:57.663923 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:51:57.663932 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:51:57.663941 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:51:57.663950 | orchestrator | 2026-04-09 06:51:57.663959 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-09 06:51:57.663968 | orchestrator | Thursday 09 April 2026 06:51:50 +0000 (0:00:01.616) 0:01:16.610 ******** 2026-04-09 06:51:57.663977 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 06:51:57.663986 | orchestrator | 2026-04-09 06:51:57.663995 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-09 06:51:57.664004 | orchestrator | Thursday 09 April 2026 06:51:52 +0000 (0:00:02.177) 0:01:18.787 ******** 2026-04-09 06:51:57.664013 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:51:57.664022 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:51:57.664031 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:51:57.664040 | orchestrator | 2026-04-09 06:51:57.664049 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 06:51:57.664058 | orchestrator | Thursday 09 April 2026 06:51:54 +0000 (0:00:01.391) 0:01:20.178 ******** 2026-04-09 06:51:57.664067 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:51:57.664077 | orchestrator | 2026-04-09 06:51:57.664086 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] 
*********** 2026-04-09 06:51:57.664095 | orchestrator | Thursday 09 April 2026 06:51:56 +0000 (0:00:02.071) 0:01:22.250 ******** 2026-04-09 06:51:57.664109 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:51:57.664127 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:01.191046 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:01.191150 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:01.191185 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:01.191219 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:01.191254 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:52:01.191268 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-04-09 06:52:01.191280 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:52:01.191293 | orchestrator | 2026-04-09 06:52:01.191306 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 06:52:01.191318 | orchestrator | Thursday 09 April 2026 06:52:00 +0000 (0:00:04.469) 0:01:26.720 ******** 2026-04-09 06:52:01.191336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:01.191425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:03.151743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:03.151856 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:52:03.151877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:03.151894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:03.151925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:03.151958 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:52:03.151991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 
06:52:03.152005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:03.152017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:03.152029 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:52:03.152040 | orchestrator | 2026-04-09 06:52:03.152053 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 06:52:03.152066 | 
orchestrator | Thursday 09 April 2026 06:52:02 +0000 (0:00:01.918) 0:01:28.639 ******** 2026-04-09 06:52:03.152083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:03.152113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:06.284276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:06.284470 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:52:06.284503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:06.284542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:06.284580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:06.284592 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:52:06.284626 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:06.284640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:06.284652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:06.284671 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:52:06.284684 | orchestrator | 2026-04-09 06:52:06.284705 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-09 06:52:06.284725 | orchestrator | Thursday 09 April 2026 06:52:04 +0000 (0:00:02.167) 0:01:30.807 ******** 2026-04-09 06:52:06.284751 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:06.284784 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:12.513080 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:12.513208 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:12.513251 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:12.513284 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:12.513298 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:52:12.513312 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:52:12.513331 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:52:12.513449 | orchestrator | 2026-04-09 06:52:12.513467 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-09 06:52:12.513486 | orchestrator | Thursday 09 April 2026 
06:52:09 +0000 (0:00:04.420) 0:01:35.227 ******** 2026-04-09 06:52:12.513499 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:12.513521 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:19.124968 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:19.125109 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:19.125127 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:19.125156 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:52:19.125169 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:52:19.125188 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:52:19.125204 | orchestrator 
| ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:52:19.125215 | orchestrator | 2026-04-09 06:52:19.125227 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-09 06:52:19.125255 | orchestrator | Thursday 09 April 2026 06:52:18 +0000 (0:00:09.411) 0:01:44.638 ******** 2026-04-09 06:52:19.125277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}})  2026-04-09 06:52:19.125295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:31.228507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:31.228665 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:52:31.228717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:31.228742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:31.228764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:31.228784 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:52:31.228829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:31.228873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:31.228903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:31.228924 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:52:31.228943 | orchestrator | 2026-04-09 06:52:31.228964 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-09 06:52:31.228986 | orchestrator | Thursday 09 April 2026 06:52:20 +0000 (0:00:02.107) 0:01:46.747 
******** 2026-04-09 06:52:31.229005 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:52:31.229023 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:52:31.229036 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:52:31.229050 | orchestrator | 2026-04-09 06:52:31.229063 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-09 06:52:31.229076 | orchestrator | Thursday 09 April 2026 06:52:22 +0000 (0:00:02.041) 0:01:48.788 ******** 2026-04-09 06:52:31.229089 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:52:31.229102 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:52:31.229115 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:52:31.229127 | orchestrator | 2026-04-09 06:52:31.229140 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-04-09 06:52:31.229158 | orchestrator | Thursday 09 April 2026 06:52:24 +0000 (0:00:01.780) 0:01:50.568 ******** 2026-04-09 06:52:31.229179 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-09 06:52:31.229199 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-09 06:52:31.229217 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:52:31.229237 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-09 06:52:31.229249 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-09 06:52:31.229259 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:52:31.229270 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-09 06:52:31.229281 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-09 06:52:31.229291 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:52:31.229310 | orchestrator | 2026-04-09 06:52:31.229321 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-09 06:52:31.229357 | orchestrator | 
Thursday 09 April 2026 06:52:26 +0000 (0:00:01.443) 0:01:52.011 ******** 2026-04-09 06:52:31.229369 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-09 06:52:31.229383 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-09 06:52:31.229396 | orchestrator | 2026-04-09 06:52:31.229415 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-09 06:52:31.229430 | orchestrator | Thursday 09 April 2026 06:52:29 +0000 (0:00:03.115) 0:01:55.127 ******** 2026-04-09 06:52:31.229445 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:52:31.229473 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:52:31.229493 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:52:31.229512 | orchestrator | 2026-04-09 06:52:57.506002 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-09 06:52:57.506180 | orchestrator | Thursday 09 April 2026 06:52:31 +0000 (0:00:02.861) 0:01:57.988 ******** 2026-04-09 06:52:57.506198 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:52:57.506212 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:52:57.506223 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:52:57.506235 | orchestrator | 2026-04-09 06:52:57.506247 | orchestrator | TASK [nova : Run Nova upgrade checks] ****************************************** 2026-04-09 06:52:57.506258 | orchestrator | Thursday 09 April 2026 06:52:35 +0000 (0:00:03.567) 0:02:01.556 ******** 2026-04-09 06:52:57.506270 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:52:57.506282 | orchestrator | 2026-04-09 06:52:57.506294 | orchestrator | TASK [nova : Upgrade status check result] ************************************** 2026-04-09 06:52:57.506305 | orchestrator | Thursday 09 
April 2026 06:52:54 +0000 (0:00:19.429) 0:02:20.986 ******** 2026-04-09 06:52:57.506359 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:52:57.506372 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:52:57.506383 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:52:57.506395 | orchestrator | 2026-04-09 06:52:57.506406 | orchestrator | TASK [nova : Stopping top level nova services] ********************************* 2026-04-09 06:52:57.506417 | orchestrator | Thursday 09 April 2026 06:52:56 +0000 (0:00:01.529) 0:02:22.515 ******** 2026-04-09 06:52:57.506434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:57.506453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:57.506493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:57.506507 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:52:57.506539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:57.506597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:57.506613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:52:57.506635 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:52:57.506650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:52:57.506674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:53:02.812798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:53:02.812910 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:53:02.812929 | orchestrator | 2026-04-09 06:53:02.812943 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-04-09 06:53:02.812957 | orchestrator | Thursday 09 April 2026 06:52:58 +0000 (0:00:02.398) 0:02:24.914 ******** 2026-04-09 06:53:02.812987 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:53:02.813022 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:53:02.813037 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:53:02.813071 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:53:02.813090 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:53:02.813112 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 06:53:02.813126 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:53:02.813148 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:53:06.277810 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 06:53:06.277914 | orchestrator | 2026-04-09 06:53:06.277931 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-09 06:53:06.277944 | orchestrator | Thursday 09 April 2026 06:53:03 +0000 (0:00:05.053) 0:02:29.967 ******** 2026-04-09 06:53:06.277957 | orchestrator | ok: [testbed-node-0] => { 2026-04-09 06:53:06.277969 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:53:06.277980 | orchestrator | } 2026-04-09 06:53:06.277993 | orchestrator | ok: [testbed-node-1] => { 2026-04-09 06:53:06.278094 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:53:06.278110 | orchestrator | } 2026-04-09 06:53:06.278121 | orchestrator | ok: [testbed-node-2] => { 2026-04-09 06:53:06.278148 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:53:06.278159 | orchestrator | } 2026-04-09 06:53:06.278171 | orchestrator | 2026-04-09 06:53:06.278182 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 06:53:06.278193 | orchestrator | Thursday 09 April 2026 06:53:05 +0000 (0:00:01.357) 0:02:31.324 ******** 2026-04-09 06:53:06.278206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:53:06.278222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:53:06.278236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:53:06.278248 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:53:06.278288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:53:06.278310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:53:06.278356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 06:53:06.278371 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:53:06.278385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:53:06.278409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 06:53:48.789450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 06:53:48.789569 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:53:48.789588 | orchestrator |
2026-04-09 06:53:48.789601 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-09 06:53:48.789614 | orchestrator | Thursday 09 April 2026 06:53:07 +0000 (0:00:02.213) 0:02:33.537 ********
2026-04-09 06:53:48.789625 | orchestrator |
2026-04-09 06:53:48.789636 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-09 06:53:48.789647 | orchestrator | Thursday 09 April 2026 06:53:08 +0000 (0:00:00.528) 0:02:34.066 ********
2026-04-09 06:53:48.789658 | orchestrator |
2026-04-09 06:53:48.789669 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-09 06:53:48.789680 | orchestrator | Thursday 09 April 2026 06:53:08 +0000 (0:00:00.508) 0:02:34.575 ********
2026-04-09 06:53:48.789691 | orchestrator |
2026-04-09 06:53:48.789702 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-04-09 06:53:48.789712 | orchestrator |
2026-04-09 06:53:48.789723 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-09 06:53:48.789734 | orchestrator | Thursday 09 April 2026 06:53:10 +0000 (0:00:01.623) 0:02:36.198 ********
2026-04-09 06:53:48.789746 | orchestrator | included: /ansible/roles/nova-cell/tasks/upgrade.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 06:53:48.789759 | orchestrator |
2026-04-09 06:53:48.789771 | orchestrator | TASK [nova-cell : Get new Libvirt version] *************************************
2026-04-09 06:53:48.789782 | orchestrator | Thursday 09 April 2026 06:53:12 +0000 (0:00:02.616) 0:02:38.814 ********
2026-04-09 06:53:48.789793 | orchestrator | changed: [testbed-node-3]
2026-04-09 06:53:48.789804 | orchestrator |
2026-04-09 06:53:48.789815 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-04-09 06:53:48.789826 | orchestrator | Thursday 09 April 2026 06:53:17 +0000 (0:00:04.460) 0:02:43.275 ********
2026-04-09 06:53:48.789837 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:53:48.789849 | orchestrator |
2026-04-09 06:53:48.789860 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-04-09 06:53:48.789873 | orchestrator | Thursday 09 April 2026 06:53:19 +0000 (0:00:02.263) 0:02:45.539 ********
2026-04-09 06:53:48.789886 | orchestrator | included: service-image-info for testbed-node-3
2026-04-09 06:53:48.789925 | orchestrator |
2026-04-09 06:53:48.789939 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-04-09 06:53:48.789952 | orchestrator | Thursday 09 April 2026 06:53:21 +0000 (0:00:02.064) 0:02:47.603 ********
2026-04-09 06:53:48.789964 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:53:48.789977 | orchestrator |
2026-04-09 06:53:48.789990 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-09 06:53:48.790002 | orchestrator | Thursday 09 April 2026 06:53:26 +0000 (0:00:04.459) 0:02:52.063 ********
2026-04-09 06:53:48.790072 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:53:48.790086 | orchestrator |
2026-04-09 06:53:48.790099 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-04-09 06:53:48.790211 | orchestrator | Thursday 09 April 2026 06:53:29 +0000 (0:00:03.012) 0:02:55.075 ********
2026-04-09 06:53:48.790228 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:53:48.790241 | orchestrator |
2026-04-09 06:53:48.790253 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-09 06:53:48.790265 | orchestrator | Thursday 09 April 2026 06:53:32 +0000 (0:00:03.028) 0:02:58.103 ********
2026-04-09 06:53:48.790276 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:53:48.790287 | orchestrator |
2026-04-09 06:53:48.790320 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-04-09 06:53:48.790332 | orchestrator | Thursday 09 April 2026 06:53:35 +0000 (0:00:02.990) 0:03:01.094 ********
2026-04-09 06:53:48.790343 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:53:48.790355 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:53:48.790366 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:53:48.790377 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:53:48.790388 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:53:48.790399 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:53:48.790411 | orchestrator |
2026-04-09 06:53:48.790422 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-04-09 06:53:48.790433 | orchestrator | Thursday 09 April 2026 06:53:39 +0000 (0:00:04.857) 0:03:05.951 ********
2026-04-09 06:53:48.790444 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:53:48.790455 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:53:48.790467 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:53:48.790478 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:53:48.790489 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:53:48.790500 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:53:48.790511 | orchestrator |
2026-04-09 06:53:48.790522 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-04-09 06:53:48.790533 | orchestrator | Thursday 09 April 2026 06:53:44 +0000 (0:00:04.775) 0:03:10.726 ********
2026-04-09 06:53:48.790545 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:53:48.790556 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:53:48.790567 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:53:48.790578 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:53:48.790589 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:53:48.790621 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:53:48.790633 | orchestrator |
2026-04-09 06:53:48.790645 | orchestrator | TASK [nova-cell : Stopping nova cell services] *********************************
2026-04-09 06:53:48.790656 | orchestrator | Thursday 09 April 2026 06:53:47 +0000 (0:00:03.150) 0:03:13.877 ********
2026-04-09 06:53:48.790678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:53:48.790692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 06:53:48.790714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 06:53:48.790727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:53:48.790739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 06:53:48.790765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 06:53:59.616058 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:53:59.616178 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:53:59.616197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:53:59.616238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 06:53:59.616253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 06:53:59.616266 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:53:59.616279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 06:53:59.616372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 06:53:59.616386 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:53:59.616418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 06:53:59.616431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 06:53:59.616450 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:53:59.616462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 06:53:59.616474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 06:53:59.616485 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:53:59.616496 | orchestrator |
2026-04-09 06:53:59.616509 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-09 06:53:59.616521 | orchestrator | Thursday 09 April 2026 06:53:51 +0000 (0:00:03.285) 0:03:17.162 ********
2026-04-09 06:53:59.616533 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:53:59.616544 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:53:59.616556 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:53:59.616568 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 06:53:59.616580 | orchestrator |
2026-04-09 06:53:59.616593 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-09 06:53:59.616607 | orchestrator | Thursday 09 April 2026 06:53:53 +0000 (0:00:02.139) 0:03:19.302 ********
2026-04-09 06:53:59.616625 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-09 06:53:59.616644 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-09 06:53:59.616664 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-09 06:53:59.616683 | orchestrator |
2026-04-09 06:53:59.616702 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-09 06:53:59.616721 | orchestrator | Thursday 09 April 2026 06:53:55 +0000 (0:00:01.967) 0:03:21.270 ********
2026-04-09 06:53:59.616740 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-09 06:53:59.616759 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-09 06:53:59.616778 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-09 06:53:59.616798 | orchestrator |
2026-04-09 06:53:59.616818 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-09 06:53:59.616839 | orchestrator | Thursday 09 April 2026 06:53:57 +0000 (0:00:02.231) 0:03:23.502 ********
2026-04-09 06:53:59.616858 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-09 06:53:59.616878 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:53:59.616891 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-09 06:53:59.616904 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:53:59.616934 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-09 06:53:59.616948 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:53:59.616961 | orchestrator |
2026-04-09 06:53:59.616972 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-09 06:53:59.616983 | orchestrator | Thursday 09 April 2026 06:53:59 +0000 (0:00:01.549) 0:03:25.051 ********
2026-04-09 06:53:59.616994 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:53:59.617005 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:53:59.617016 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:53:59.617037 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:54:08.051103 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:54:08.051232 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:54:08.051251 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:54:08.051264 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:54:08.051276 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:54:08.051336 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 06:54:08.051348 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:54:08.051360 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:54:08.051372 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:54:08.051383 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:54:08.051395 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 06:54:08.051406 | orchestrator |
2026-04-09 06:54:08.051419 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-09 06:54:08.051430 | orchestrator | Thursday 09 April 2026 06:54:01 +0000 (0:00:02.398) 0:03:27.450 ********
2026-04-09 06:54:08.051441 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:54:08.051453 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:54:08.051464 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:54:08.051476 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:54:08.051488 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:54:08.051499 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:54:08.051511 | orchestrator |
2026-04-09 06:54:08.051523 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-09 06:54:08.051534 | orchestrator | Thursday 09 April 2026 06:54:03 +0000 (0:00:02.301) 0:03:29.751 ********
2026-04-09 06:54:08.051545 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:54:08.051557 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:54:08.051568 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:54:08.051579 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:54:08.051590 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:54:08.051602 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:54:08.051613 | orchestrator |
2026-04-09 06:54:08.051627 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-09 06:54:08.051641 | orchestrator | Thursday 09 April 2026 06:54:06 +0000 (0:00:02.445) 0:03:32.197 ********
2026-04-09 06:54:08.051659 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:54:08.051699 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:54:08.051730 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 06:54:08.051766 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:54:08.051781 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 06:54:08.051796 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 06:54:08.051812 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 06:54:08.051834 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 06:54:08.051861 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 06:54:14.248283 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 06:54:14.248450 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 06:54:14.248468 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 06:54:14.248503 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 06:54:14.248516 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 06:54:14.248544 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 06:54:14.248557 | orchestrator |
2026-04-09 06:54:14.248591 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-09 06:54:14.248605 | orchestrator | Thursday 09 April 2026 06:54:09 +0000 (0:00:03.547) 0:03:35.744 ********
2026-04-09 06:54:14.248618 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 06:54:14.248631 | orchestrator |
2026-04-09 06:54:14.248642 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-09 06:54:14.248653 | orchestrator | Thursday 09 April 2026 06:54:11 +0000 (0:00:02.195) 0:03:37.939 ********
2026-04-09 06:54:14.248666 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:54:14.248679 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:54:14.248699 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 06:54:14.248716 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 06:54:14.248737 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 06:54:17.825849 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 06:54:17.825959 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 06:54:17.825999 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled':
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:54:17.826013 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:54:17.826088 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:54:17.826116 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:54:17.826150 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:54:17.826163 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 
06:54:17.826184 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:54:17.826196 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:54:17.826208 | orchestrator | 2026-04-09 06:54:17.826221 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 06:54:17.826234 | orchestrator | Thursday 09 April 2026 06:54:16 +0000 (0:00:04.615) 0:03:42.554 ******** 2026-04-09 06:54:17.826254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:54:17.826275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:54:18.712611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:54:18.712704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:54:18.712713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:54:18.712719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:54:18.712735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:54:18.712742 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:54:18.712760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:54:18.712771 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:54:18.712776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:54:18.712782 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:54:18.712788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:54:18.712794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:54:18.712803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:54:18.712808 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:54:18.712814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:54:18.712819 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 06:54:18.712830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:54:22.122427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:54:22.122533 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:54:22.122550 | orchestrator | 2026-04-09 06:54:22.122561 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 06:54:22.122572 | orchestrator | Thursday 09 April 2026 06:54:19 +0000 (0:00:03.413) 0:03:45.968 ******** 2026-04-09 06:54:22.122584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:54:22.122612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:54:22.122624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:54:22.122661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:54:22.122713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:54:22.122726 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:54:22.122736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:54:22.122747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:54:22.122757 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:54:22.122772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:54:22.122783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:54:22.122800 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:54:22.122819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:54:51.754714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:54:51.754857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:54:51.754885 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:54:51.754910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:54:51.754953 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:54:51.754976 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:54:51.754998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:54:51.755056 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:54:51.755078 | orchestrator | 2026-04-09 06:54:51.755100 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 06:54:51.755122 | orchestrator | Thursday 09 April 2026 06:54:23 +0000 (0:00:03.811) 0:03:49.780 ******** 2026-04-09 06:54:51.755142 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:54:51.755162 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:54:51.755181 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:54:51.755202 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 06:54:51.755223 | orchestrator |
2026-04-09 06:54:51.755243 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-04-09 06:54:51.755263 | orchestrator | Thursday 09 April 2026 06:54:26 +0000 (0:00:02.404) 0:03:52.184 ********
2026-04-09 06:54:51.755283 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 06:54:51.755303 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-09 06:54:51.755323 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-09 06:54:51.755343 | orchestrator |
2026-04-09 06:54:51.755364 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-04-09 06:54:51.755408 | orchestrator | Thursday 09 April 2026 06:54:28 +0000 (0:00:02.077) 0:03:54.262 ********
2026-04-09 06:54:51.755527 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 06:54:51.755552 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-09 06:54:51.755572 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-09 06:54:51.755593 | orchestrator |
2026-04-09 06:54:51.755613 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-04-09 06:54:51.755632 | orchestrator | Thursday 09 April 2026 06:54:30 +0000 (0:00:02.088) 0:03:56.351 ********
2026-04-09 06:54:51.755652 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:54:51.755670 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:54:51.755688 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:54:51.755707 | orchestrator |
2026-04-09 06:54:51.755726 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-04-09 06:54:51.755744 | orchestrator | Thursday 09 April 2026 06:54:32 +0000 (0:00:01.841) 0:03:58.192 ********
2026-04-09 06:54:51.755763 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:54:51.755781 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:54:51.755799 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:54:51.755817 | orchestrator |
2026-04-09 06:54:51.755836 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-04-09 06:54:51.755855 | orchestrator | Thursday 09 April 2026 06:54:33 +0000 (0:00:01.629) 0:03:59.822 ********
2026-04-09 06:54:51.755873 | orchestrator | ok: [testbed-node-3] => (item=nova-compute)
2026-04-09 06:54:51.755893 | orchestrator | ok: [testbed-node-4] => (item=nova-compute)
2026-04-09 06:54:51.755913 | orchestrator | ok: [testbed-node-5] => (item=nova-compute)
2026-04-09 06:54:51.755933 | orchestrator |
2026-04-09 06:54:51.755954 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-04-09 06:54:51.755973 | orchestrator | Thursday 09 April 2026 06:54:36 +0000 (0:00:02.231) 0:04:02.053 ********
2026-04-09 06:54:51.755993 | orchestrator | ok: [testbed-node-3] => (item=nova-compute)
2026-04-09 06:54:51.756012 | orchestrator | ok: [testbed-node-4] => (item=nova-compute)
2026-04-09 06:54:51.756030 | orchestrator | ok: [testbed-node-5] => (item=nova-compute)
2026-04-09 06:54:51.756049 | orchestrator |
2026-04-09 06:54:51.756068 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-04-09 06:54:51.756108 | orchestrator | Thursday 09 April 2026 06:54:38 +0000 (0:00:02.188) 0:04:04.241 ********
2026-04-09 06:54:51.756120 | orchestrator | ok: [testbed-node-3] => (item=nova-compute)
2026-04-09 06:54:51.756131 | orchestrator | ok: [testbed-node-4] => (item=nova-compute)
2026-04-09 06:54:51.756142 | orchestrator | ok: [testbed-node-5] => (item=nova-compute)
2026-04-09 06:54:51.756153 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt)
2026-04-09 06:54:51.756164 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt)
2026-04-09 06:54:51.756175 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt)
2026-04-09 06:54:51.756185 | orchestrator |
2026-04-09 06:54:51.756197 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-04-09 06:54:51.756209 | orchestrator | Thursday 09 April 2026 06:54:43 +0000 (0:00:04.921) 0:04:09.163 ********
2026-04-09 06:54:51.756220 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:54:51.756231 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:54:51.756242 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:54:51.756252 | orchestrator |
2026-04-09 06:54:51.756263 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-04-09 06:54:51.756274 | orchestrator | Thursday 09 April 2026 06:54:44 +0000 (0:00:01.370) 0:04:10.534 ********
2026-04-09 06:54:51.756285 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:54:51.756306 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:54:51.756317 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:54:51.756328 | orchestrator |
2026-04-09 06:54:51.756337 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-04-09 06:54:51.756347 | orchestrator | Thursday 09 April 2026 06:54:45 +0000 (0:00:01.383) 0:04:11.917 ********
2026-04-09 06:54:51.756357 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:54:51.756367 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:54:51.756376 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:54:51.756386 | orchestrator |
2026-04-09 06:54:51.756395 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-04-09 06:54:51.756405 | orchestrator | Thursday 09 April 2026 06:54:48 +0000 (0:00:02.520) 0:04:14.438 ********
2026-04-09 06:54:51.756416 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True})
2026-04-09 06:54:51.756458 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True})
2026-04-09 06:54:51.756473 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True})
2026-04-09 06:54:51.756483 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'})
2026-04-09 06:54:51.756493 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'})
2026-04-09 06:54:51.756503 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'})
2026-04-09 06:54:51.756513 | orchestrator |
2026-04-09 06:54:51.756523 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-04-09 06:55:13.202331 | orchestrator | Thursday 09 April 2026 06:54:52 +0000 (0:00:04.311) 0:04:18.749 ********
2026-04-09 06:55:13.202436 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-04-09 06:55:13.202451 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-09 06:55:13.202462 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-09 06:55:13.202491 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-04-09 06:55:13.202558 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:55:13.202570 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-09 06:55:13.202580 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:55:13.202590 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-09 06:55:13.202600 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:55:13.202611 | orchestrator |
2026-04-09 06:55:13.202622 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-04-09 06:55:13.202632 | orchestrator | Thursday 09 April 2026 06:54:57 +0000 (0:00:04.418) 0:04:23.168 ********
2026-04-09 06:55:13.202642 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:13.202653 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:13.202663 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:13.202673 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-4, testbed-node-5, testbed-node-3
2026-04-09 06:55:13.202684 | orchestrator |
2026-04-09 06:55:13.202694 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-04-09 06:55:13.202704 | orchestrator | Thursday 09 April 2026 06:55:00 +0000 (0:00:03.321) 0:04:26.490 ********
2026-04-09 06:55:13.202714 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 06:55:13.202724 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-09 06:55:13.202734 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-09 06:55:13.202744 | orchestrator |
2026-04-09 06:55:13.202754 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-04-09 06:55:13.202765 | orchestrator | Thursday 09 April 2026 06:55:02 +0000 (0:00:02.406) 0:04:28.896 ********
2026-04-09 06:55:13.202774 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:55:13.202785 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:55:13.202795 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:55:13.202805 | orchestrator |
2026-04-09 06:55:13.202815 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-04-09 06:55:13.202825 | orchestrator | Thursday 09 April 2026 06:55:04 +0000 (0:00:01.528) 0:04:30.425 ********
2026-04-09 06:55:13.202834 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:55:13.202845 | orchestrator |
2026-04-09 06:55:13.202855 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-04-09 06:55:13.202865 | orchestrator | Thursday 09 April 2026 06:55:05 +0000 (0:00:01.144) 0:04:31.570 ********
2026-04-09 06:55:13.202876 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:55:13.202889 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:55:13.202900 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:55:13.202913 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:13.202925 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:13.202936 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:13.202948 | orchestrator |
2026-04-09 06:55:13.202960 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-04-09 06:55:13.202972 | orchestrator | Thursday 09 April 2026 06:55:07 +0000 (0:00:01.636) 0:04:33.206 ********
2026-04-09 06:55:13.202982 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 06:55:13.202992 | orchestrator |
2026-04-09 06:55:13.203015 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-04-09 06:55:13.203026 | orchestrator | Thursday 09 April 2026 06:55:08 +0000 (0:00:01.678) 0:04:34.885 ********
2026-04-09 06:55:13.203036 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:55:13.203046 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:55:13.203056 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:55:13.203066 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:13.203076 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:13.203087 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:13.203097 | orchestrator | 2026-04-09 06:55:13.203107 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-09 06:55:13.203117 | orchestrator | Thursday 09 April 2026 06:55:10 +0000 (0:00:01.830) 0:04:36.716 ******** 2026-04-09 06:55:13.203139 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:55:13.203170 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:55:13.203182 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:55:13.203193 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:55:13.203210 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:55:13.203227 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:55:13.203239 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:55:13.203257 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:55:16.874505 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:55:16.874634 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:16.874651 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:16.874684 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:16.874720 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:16.874732 | orchestrator 
| ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:16.874763 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:16.874776 | orchestrator | 2026-04-09 06:55:16.874790 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-09 06:55:16.874802 | orchestrator | Thursday 09 April 2026 06:55:15 +0000 (0:00:04.749) 0:04:41.465 ******** 2026-04-09 06:55:16.874815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:55:16.874833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:55:16.874853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:55:16.874865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:55:16.874885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:55:28.904783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:55:28.904916 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:28.904957 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:28.904972 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:55:28.904984 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:55:28.905011 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:28.905024 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:55:28.905041 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:28.905061 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:28.905072 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:55:28.905084 | orchestrator | 2026-04-09 06:55:28.905098 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-09 06:55:28.905110 | orchestrator | Thursday 09 April 2026 06:55:23 +0000 (0:00:08.018) 0:04:49.484 ******** 2026-04-09 06:55:28.905121 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:55:28.905134 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:55:28.905144 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:55:28.905155 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:55:28.905166 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:55:28.905177 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:55:28.905188 | orchestrator | 2026-04-09 06:55:28.905199 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-09 06:55:28.905210 | orchestrator | Thursday 09 April 2026 06:55:26 +0000 (0:00:03.063) 0:04:52.548 ******** 2026-04-09 06:55:28.905222 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:55:28.905233 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:55:28.905244 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:55:28.905255 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:55:28.905265 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:55:28.905277 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 06:55:28.905288 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:55:28.905300 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:28.905312 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:55:28.905325 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:28.905345 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:55:59.449504 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:59.449634 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:55:59.449700 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:55:59.449709 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 06:55:59.449716 | orchestrator |
2026-04-09 06:55:59.449724 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-09 06:55:59.449732 | orchestrator | Thursday 09 April 2026 06:55:31 +0000 (0:00:04.720) 0:04:57.268 ********
2026-04-09 06:55:59.449739 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:55:59.449746 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:55:59.449752 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:55:59.449759 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:59.449766 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:59.449772 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:59.449779 | orchestrator |
2026-04-09 06:55:59.449786 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-09 06:55:59.449793 | orchestrator | Thursday 09 April 2026 06:55:33 +0000 (0:00:02.002) 0:04:59.270 ********
2026-04-09 06:55:59.449800 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:55:59.449807 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:55:59.449814 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:55:59.449833 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:55:59.449841 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:55:59.449848 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449855 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449862 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449868 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449875 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:59.449882 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 06:55:59.449889 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449895 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:59.449902 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449908 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:59.449915 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449921 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449928 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449934 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449940 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449947 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 06:55:59.449962 | orchestrator |
2026-04-09 06:55:59.449969 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-09 06:55:59.449975 | orchestrator | Thursday 09 April 2026 06:55:40 +0000 (0:00:06.816) 0:05:06.087 ********
2026-04-09 06:55:59.449981 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:55:59.449987 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:55:59.449993 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:55:59.449999 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:55:59.450005 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:55:59.450011 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:55:59.450067 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:55:59.450077 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:55:59.450085 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 06:55:59.450111 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:55:59.450118 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:55:59.450126 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:55:59.450133 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:55:59.450140 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:59.450148 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:55:59.450157 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:59.450165 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:55:59.450173 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:59.450181 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:55:59.450189 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:55:59.450196 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 06:55:59.450203 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:55:59.450210 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:55:59.450217 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 06:55:59.450229 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:55:59.450237 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:55:59.450244 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 06:55:59.450252 | orchestrator |
2026-04-09 06:55:59.450258 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-09 06:55:59.450266 | orchestrator | Thursday 09 April 2026 06:55:48 +0000 (0:00:08.537) 0:05:14.624 ********
2026-04-09 06:55:59.450273 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:55:59.450280 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:55:59.450287 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:55:59.450294 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:59.450302 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:59.450308 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:59.450315 | orchestrator |
2026-04-09 06:55:59.450323 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-09 06:55:59.450338 | orchestrator | Thursday 09 April 2026 06:55:50 +0000 (0:00:01.848) 0:05:16.472 ********
2026-04-09 06:55:59.450346 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:55:59.450354 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:55:59.450361 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:55:59.450369 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:59.450377 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:59.450383 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:59.450390 | orchestrator |
2026-04-09 06:55:59.450397 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-09 06:55:59.450405 | orchestrator | Thursday 09 April 2026 06:55:52 +0000 (0:00:01.979) 0:05:18.452 ********
2026-04-09 06:55:59.450411 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:59.450417 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:55:59.450424 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:59.450430 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:55:59.450435 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:55:59.450441 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:59.450447 | orchestrator |
2026-04-09 06:55:59.450452 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-09 06:55:59.450458 | orchestrator | Thursday 09 April 2026 06:55:55 +0000 (0:00:03.015) 0:05:21.467 ********
2026-04-09 06:55:59.450464 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:55:59.450470 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:55:59.450476 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:55:59.450482 | orchestrator | ok: [testbed-node-3]
2026-04-09 06:55:59.450488 | orchestrator | ok: [testbed-node-4]
2026-04-09 06:55:59.450494 | orchestrator | ok: [testbed-node-5]
2026-04-09 06:55:59.450500 | orchestrator |
2026-04-09 06:55:59.450507 | orchestrator | TASK
[nova-cell : Copying over existing policy file] *************************** 2026-04-09 06:55:59.450513 | orchestrator | Thursday 09 April 2026 06:55:58 +0000 (0:00:03.192) 0:05:24.659 ******** 2026-04-09 06:55:59.450524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:55:59.450547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:56:00.747793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:56:00.747918 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:56:00.747940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:56:00.747955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:56:00.747968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:56:00.747980 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:56:00.747991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:56:00.748022 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:56:00.748049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:56:00.748061 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:56:00.748073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:56:00.748085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:56:00.748097 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:56:00.748108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:56:00.748120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:56:00.748131 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:56:00.748150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:56:07.223450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:56:07.223576 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:56:07.223602 | orchestrator | 2026-04-09 06:56:07.223616 | 
orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-09 06:56:07.223628 | orchestrator | Thursday 09 April 2026 06:56:01 +0000 (0:00:03.250) 0:05:27.909 ********
2026-04-09 06:56:07.223640 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-09 06:56:07.223652 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-09 06:56:07.223702 | orchestrator | skipping: [testbed-node-3]
2026-04-09 06:56:07.223715 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-09 06:56:07.223727 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-09 06:56:07.223738 | orchestrator | skipping: [testbed-node-4]
2026-04-09 06:56:07.223749 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-09 06:56:07.223760 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-09 06:56:07.223771 | orchestrator | skipping: [testbed-node-5]
2026-04-09 06:56:07.223783 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-09 06:56:07.223794 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-09 06:56:07.223805 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:56:07.223816 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-09 06:56:07.223827 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-09 06:56:07.223838 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:56:07.223849 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-09 06:56:07.223860 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-09 06:56:07.223871 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:56:07.223882 | orchestrator |
2026-04-09 06:56:07.223893 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] *****************
2026-04-09 06:56:07.223905 | orchestrator | Thursday 09 April 2026 06:56:04 +0000 (0:00:02.250) 0:05:30.160 ******** 2026-04-09 06:56:07.223918 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:56:07.223953 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:56:07.223992 | 
orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 06:56:07.224008 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:56:07.224024 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:56:07.224038 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:56:07.224051 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:56:07.224073 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 06:56:07.224095 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 06:56:12.329647 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:56:12.330714 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:56:12.330756 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:56:12.330773 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:56:12.330812 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 06:56:12.330857 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 06:56:12.330871 | orchestrator | 2026-04-09 06:56:12.330886 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-09 06:56:12.330898 | orchestrator | Thursday 09 April 2026 06:56:09 +0000 (0:00:05.205) 0:05:35.365 ******** 2026-04-09 06:56:12.330911 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 06:56:12.330924 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:56:12.330936 | orchestrator | } 2026-04-09 06:56:12.330947 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 06:56:12.330958 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:56:12.330969 | orchestrator | } 2026-04-09 06:56:12.330980 | orchestrator | ok: 
[testbed-node-5] => { 2026-04-09 06:56:12.330991 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:56:12.331002 | orchestrator | } 2026-04-09 06:56:12.331013 | orchestrator | ok: [testbed-node-0] => { 2026-04-09 06:56:12.331023 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:56:12.331034 | orchestrator | } 2026-04-09 06:56:12.331045 | orchestrator | ok: [testbed-node-1] => { 2026-04-09 06:56:12.331056 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:56:12.331067 | orchestrator | } 2026-04-09 06:56:12.331078 | orchestrator | ok: [testbed-node-2] => { 2026-04-09 06:56:12.331088 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 06:56:12.331099 | orchestrator | } 2026-04-09 06:56:12.331111 | orchestrator | 2026-04-09 06:56:12.331122 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 06:56:12.331133 | orchestrator | Thursday 09 April 2026 06:56:11 +0000 (0:00:01.860) 0:05:37.226 ******** 2026-04-09 06:56:12.331145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:56:12.331166 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:56:12.331178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:56:12.331190 | orchestrator | skipping: [testbed-node-3] 2026-04-09 06:56:12.331216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:56:16.289147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:56:16.289315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:56:16.289343 | orchestrator | skipping: [testbed-node-4] 2026-04-09 06:56:16.289351 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 06:56:16.289357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 06:56:16.289362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 06:56:16.289367 | orchestrator | skipping: [testbed-node-5] 2026-04-09 06:56:16.289401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:56:16.289412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:56:16.289426 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:56:16.289438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:56:16.289448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:56:16.289456 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:56:16.289463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 06:56:16.289471 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 06:56:16.289479 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:56:16.289487 | orchestrator | 2026-04-09 06:56:16.289496 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 06:56:16.289506 | orchestrator | Thursday 09 April 2026 06:56:14 +0000 (0:00:03.637) 0:05:40.864 ******** 2026-04-09 06:56:16.289514 | orchestrator | 2026-04-09 06:56:16.289521 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 06:56:16.289530 | orchestrator | Thursday 09 April 2026 06:56:15 +0000 (0:00:00.518) 0:05:41.382 ******** 2026-04-09 06:56:16.289538 | orchestrator | 2026-04-09 06:56:16.289550 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 06:56:16.289558 | orchestrator | Thursday 09 April 2026 06:56:16 +0000 (0:00:00.734) 0:05:42.116 ******** 2026-04-09 06:56:16.289566 | orchestrator | 2026-04-09 06:56:16.289582 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 06:57:52.196395 | orchestrator | Thursday 09 April 2026 06:56:16 +0000 (0:00:00.568) 0:05:42.685 ******** 2026-04-09 06:57:52.196544 | orchestrator | 2026-04-09 06:57:52.196565 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 06:57:52.196577 | 
orchestrator | Thursday 09 April 2026 06:56:17 +0000 (0:00:00.531) 0:05:43.217 ******** 2026-04-09 06:57:52.196614 | orchestrator | 2026-04-09 06:57:52.196626 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 06:57:52.196637 | orchestrator | Thursday 09 April 2026 06:56:17 +0000 (0:00:00.533) 0:05:43.750 ******** 2026-04-09 06:57:52.196648 | orchestrator | 2026-04-09 06:57:52.196659 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-09 06:57:52.196670 | orchestrator | 2026-04-09 06:57:52.196681 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-09 06:57:52.196692 | orchestrator | Thursday 09 April 2026 06:56:19 +0000 (0:00:02.030) 0:05:45.780 ******** 2026-04-09 06:57:52.196703 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:57:52.196715 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:57:52.196726 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:57:52.196737 | orchestrator | 2026-04-09 06:57:52.196748 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-09 06:57:52.196759 | orchestrator | 2026-04-09 06:57:52.196770 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-09 06:57:52.196781 | orchestrator | Thursday 09 April 2026 06:56:21 +0000 (0:00:01.720) 0:05:47.501 ******** 2026-04-09 06:57:52.196792 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:57:52.196803 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:57:52.196814 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:57:52.196825 | orchestrator | 2026-04-09 06:57:52.196836 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-09 06:57:52.196847 | orchestrator | 2026-04-09 06:57:52.196858 | orchestrator | TASK [nova-cell : Reload nova 
cell services to remove RPC version cap] ********* 2026-04-09 06:57:52.196869 | orchestrator | Thursday 09 April 2026 06:56:24 +0000 (0:00:02.565) 0:05:50.066 ******** 2026-04-09 06:57:52.196880 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-09 06:57:52.196891 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-09 06:57:52.196902 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-09 06:57:52.196914 | orchestrator | changed: [testbed-node-1] => (item=nova-conductor) 2026-04-09 06:57:52.196925 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-09 06:57:52.196936 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-09 06:57:52.196997 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-09 06:57:52.197010 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-09 06:57:52.197021 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-09 06:57:52.197032 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-09 06:57:52.197043 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-09 06:57:52.197054 | orchestrator | changed: [testbed-node-2] => (item=nova-conductor) 2026-04-09 06:57:52.197065 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-09 06:57:52.197076 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-09 06:57:52.197087 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-09 06:57:52.197098 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-09 06:57:52.197108 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-09 06:57:52.197119 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-09 06:57:52.197130 | orchestrator | skipping: [testbed-node-3] => 
(item=nova-novncproxy)  2026-04-09 06:57:52.197141 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-09 06:57:52.197152 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-09 06:57:52.197163 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-09 06:57:52.197173 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-09 06:57:52.197184 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-09 06:57:52.197203 | orchestrator | changed: [testbed-node-0] => (item=nova-conductor) 2026-04-09 06:57:52.197214 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-09 06:57:52.197225 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-09 06:57:52.197236 | orchestrator | changed: [testbed-node-1] => (item=nova-novncproxy) 2026-04-09 06:57:52.197247 | orchestrator | changed: [testbed-node-2] => (item=nova-novncproxy) 2026-04-09 06:57:52.197258 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-09 06:57:52.197269 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-09 06:57:52.197280 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-09 06:57:52.197291 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-09 06:57:52.197302 | orchestrator | changed: [testbed-node-0] => (item=nova-novncproxy) 2026-04-09 06:57:52.197313 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-09 06:57:52.197324 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-09 06:57:52.197335 | orchestrator | 2026-04-09 06:57:52.197346 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-09 06:57:52.197357 | orchestrator | 2026-04-09 06:57:52.197383 | orchestrator | TASK [nova : Reload nova 
API services to remove RPC version pin] *************** 2026-04-09 06:57:52.197394 | orchestrator | Thursday 09 April 2026 06:56:59 +0000 (0:00:35.408) 0:06:25.475 ******** 2026-04-09 06:57:52.197405 | orchestrator | changed: [testbed-node-0] => (item=nova-scheduler) 2026-04-09 06:57:52.197435 | orchestrator | changed: [testbed-node-1] => (item=nova-scheduler) 2026-04-09 06:57:52.197447 | orchestrator | changed: [testbed-node-2] => (item=nova-scheduler) 2026-04-09 06:57:52.197458 | orchestrator | changed: [testbed-node-1] => (item=nova-api) 2026-04-09 06:57:52.197469 | orchestrator | changed: [testbed-node-0] => (item=nova-api) 2026-04-09 06:57:52.197479 | orchestrator | changed: [testbed-node-2] => (item=nova-api) 2026-04-09 06:57:52.197490 | orchestrator | 2026-04-09 06:57:52.197501 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-09 06:57:52.197513 | orchestrator | 2026-04-09 06:57:52.197523 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-09 06:57:52.197535 | orchestrator | Thursday 09 April 2026 06:57:19 +0000 (0:00:19.956) 0:06:45.431 ******** 2026-04-09 06:57:52.197546 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:57:52.197557 | orchestrator | 2026-04-09 06:57:52.197568 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-09 06:57:52.197579 | orchestrator | 2026-04-09 06:57:52.197590 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-09 06:57:52.197601 | orchestrator | Thursday 09 April 2026 06:57:36 +0000 (0:00:17.524) 0:07:02.956 ******** 2026-04-09 06:57:52.197612 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:57:52.197623 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:57:52.197635 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:57:52.197646 | orchestrator | 2026-04-09 06:57:52.197657 | 
orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 06:57:52.197668 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 06:57:52.197681 | orchestrator | testbed-node-0 : ok=39  changed=8  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-09 06:57:52.197693 | orchestrator | testbed-node-1 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-09 06:57:52.197704 | orchestrator | testbed-node-2 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-09 06:57:52.197715 | orchestrator | testbed-node-3 : ok=43  changed=5  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-09 06:57:52.197733 | orchestrator | testbed-node-4 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-09 06:57:52.197744 | orchestrator | testbed-node-5 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-09 06:57:52.197755 | orchestrator | 2026-04-09 06:57:52.197766 | orchestrator | 2026-04-09 06:57:52.197778 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 06:57:52.197789 | orchestrator | Thursday 09 April 2026 06:57:51 +0000 (0:00:14.725) 0:07:17.681 ******** 2026-04-09 06:57:52.197800 | orchestrator | =============================================================================== 2026-04-09 06:57:52.197811 | orchestrator | nova-cell : Reload nova cell services to remove RPC version cap -------- 35.41s 2026-04-09 06:57:52.197822 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.96s 2026-04-09 06:57:52.197833 | orchestrator | nova : Reload nova API services to remove RPC version pin -------------- 19.96s 2026-04-09 06:57:52.197844 | orchestrator | nova : Run Nova upgrade checks ----------------------------------------- 19.43s 2026-04-09 
06:57:52.197855 | orchestrator | nova : Run Nova API online database migrations ------------------------- 17.52s 2026-04-09 06:57:52.197866 | orchestrator | nova-cell : Run Nova cell online database migrations ------------------- 14.73s 2026-04-09 06:57:52.197877 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 13.39s 2026-04-09 06:57:52.197888 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.41s 2026-04-09 06:57:52.197899 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.54s 2026-04-09 06:57:52.197910 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.02s 2026-04-09 06:57:52.197921 | orchestrator | nova-cell : Copying over libvirt SASL configuration --------------------- 6.82s 2026-04-09 06:57:52.197932 | orchestrator | service-check-containers : nova_cell | Check containers ----------------- 5.20s 2026-04-09 06:57:52.197968 | orchestrator | service-check-containers : nova | Check containers ---------------------- 5.05s 2026-04-09 06:57:52.197990 | orchestrator | nova-cell : Copy over ceph.conf ----------------------------------------- 4.92s 2026-04-09 06:57:52.198009 | orchestrator | nova-cell : Flush handlers ---------------------------------------------- 4.92s 2026-04-09 06:57:52.198159 | orchestrator | nova-cell : Get container facts ----------------------------------------- 4.86s 2026-04-09 06:57:52.198172 | orchestrator | nova-cell : Get current Libvirt version --------------------------------- 4.77s 2026-04-09 06:57:52.198183 | orchestrator | nova-cell : Copying over config.json files for services ----------------- 4.75s 2026-04-09 06:57:52.198194 | orchestrator | nova-cell : Copying over libvirt configuration -------------------------- 4.72s 2026-04-09 06:57:52.198213 | orchestrator | service-cert-copy : nova | Copying over extra CA certificates ----------- 4.62s 2026-04-09 06:57:52.382433 
| orchestrator | + osism apply -a upgrade horizon 2026-04-09 06:57:53.738918 | orchestrator | 2026-04-09 06:57:53 | INFO  | Prepare task for execution of horizon. 2026-04-09 06:57:53.804818 | orchestrator | 2026-04-09 06:57:53 | INFO  | Task eaee6b36-a3ff-40eb-a52d-983656cb34ba (horizon) was prepared for execution. 2026-04-09 06:57:53.804908 | orchestrator | 2026-04-09 06:57:53 | INFO  | It takes a moment until task eaee6b36-a3ff-40eb-a52d-983656cb34ba (horizon) has been started and output is visible here. 2026-04-09 06:58:03.409509 | orchestrator | 2026-04-09 06:58:03.410433 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 06:58:03.410469 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-09 06:58:03.410483 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-09 06:58:03.410534 | orchestrator | 2026-04-09 06:58:03.410546 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 06:58:03.410557 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-09 06:58:03.410568 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-09 06:58:03.410590 | orchestrator | Thursday 09 April 2026 06:57:58 +0000 (0:00:01.118) 0:00:01.118 ******** 2026-04-09 06:58:03.410601 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:03.410613 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:03.410625 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:03.410636 | orchestrator | 2026-04-09 06:58:03.410647 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 06:58:03.410658 | orchestrator | Thursday 09 April 2026 06:57:59 +0000 (0:00:01.089) 0:00:02.208 ******** 2026-04-09 06:58:03.410669 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-09 06:58:03.410681 | 
orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-09 06:58:03.410692 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-09 06:58:03.410703 | orchestrator | 2026-04-09 06:58:03.410713 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-09 06:58:03.410724 | orchestrator | 2026-04-09 06:58:03.410735 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 06:58:03.410746 | orchestrator | Thursday 09 April 2026 06:58:00 +0000 (0:00:01.141) 0:00:03.350 ******** 2026-04-09 06:58:03.410757 | orchestrator | included: /ansible/roles/horizon/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:58:03.410769 | orchestrator | 2026-04-09 06:58:03.410780 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-09 06:58:03.410791 | orchestrator | Thursday 09 April 2026 06:58:01 +0000 (0:00:01.312) 0:00:04.662 ******** 2026-04-09 06:58:03.410823 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 06:58:03.410871 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 06:58:03.410900 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 
'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 06:58:10.506682 | orchestrator | 2026-04-09 06:58:10.506766 | orchestrator | TASK [horizon : Set empty custom policy] 
*************************************** 2026-04-09 06:58:10.506779 | orchestrator | Thursday 09 April 2026 06:58:03 +0000 (0:00:01.682) 0:00:06.344 ******** 2026-04-09 06:58:10.506786 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:10.506791 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:10.506795 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:10.506799 | orchestrator | 2026-04-09 06:58:10.506803 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 06:58:10.506807 | orchestrator | Thursday 09 April 2026 06:58:03 +0000 (0:00:00.333) 0:00:06.678 ******** 2026-04-09 06:58:10.506812 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 06:58:10.506816 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 06:58:10.506820 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 06:58:10.506824 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 06:58:10.506828 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 06:58:10.506832 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 06:58:10.506835 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-09 06:58:10.506839 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 06:58:10.506843 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 06:58:10.506847 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 06:58:10.506853 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 06:58:10.506858 | orchestrator | 
skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 06:58:10.506864 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 06:58:10.506870 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 06:58:10.506875 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-09 06:58:10.506882 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 06:58:10.506888 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 06:58:10.506894 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 06:58:10.506900 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 06:58:10.506906 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 06:58:10.506912 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 06:58:10.506918 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 06:58:10.506923 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-09 06:58:10.506929 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 06:58:10.506970 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-09 06:58:10.506979 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-09 06:58:10.506983 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-09 06:58:10.507035 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-09 06:58:10.507040 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-09 06:58:10.507054 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-09 06:58:10.507059 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-09 06:58:10.507066 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-09 06:58:10.507072 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-09 06:58:10.507114 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-09 06:58:10.507121 | orchestrator | 2026-04-09 06:58:10.507127 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:10.507133 | orchestrator | Thursday 09 April 2026 06:58:05 +0000 (0:00:01.710) 0:00:08.389 ******** 2026-04-09 06:58:10.507139 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:10.507145 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:10.507152 | orchestrator | ok: [testbed-node-2] 
2026-04-09 06:58:10.507158 | orchestrator | 2026-04-09 06:58:10.507164 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:10.507170 | orchestrator | Thursday 09 April 2026 06:58:05 +0000 (0:00:00.381) 0:00:08.770 ******** 2026-04-09 06:58:10.507176 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507180 | orchestrator | 2026-04-09 06:58:10.507184 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:10.507188 | orchestrator | Thursday 09 April 2026 06:58:06 +0000 (0:00:00.147) 0:00:08.917 ******** 2026-04-09 06:58:10.507192 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507195 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:10.507199 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:10.507203 | orchestrator | 2026-04-09 06:58:10.507207 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:10.507210 | orchestrator | Thursday 09 April 2026 06:58:06 +0000 (0:00:00.293) 0:00:09.210 ******** 2026-04-09 06:58:10.507214 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:10.507219 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:10.507225 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:10.507231 | orchestrator | 2026-04-09 06:58:10.507237 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:10.507243 | orchestrator | Thursday 09 April 2026 06:58:06 +0000 (0:00:00.528) 0:00:09.739 ******** 2026-04-09 06:58:10.507249 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507255 | orchestrator | 2026-04-09 06:58:10.507261 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:10.507268 | orchestrator | Thursday 09 April 2026 06:58:07 +0000 (0:00:00.139) 0:00:09.878 ******** 2026-04-09 
06:58:10.507279 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507283 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:10.507288 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:10.507294 | orchestrator | 2026-04-09 06:58:10.507301 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:10.507307 | orchestrator | Thursday 09 April 2026 06:58:07 +0000 (0:00:00.318) 0:00:10.197 ******** 2026-04-09 06:58:10.507313 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:10.507319 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:10.507325 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:10.507329 | orchestrator | 2026-04-09 06:58:10.507332 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:10.507336 | orchestrator | Thursday 09 April 2026 06:58:07 +0000 (0:00:00.327) 0:00:10.525 ******** 2026-04-09 06:58:10.507340 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507346 | orchestrator | 2026-04-09 06:58:10.507352 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:10.507359 | orchestrator | Thursday 09 April 2026 06:58:07 +0000 (0:00:00.144) 0:00:10.669 ******** 2026-04-09 06:58:10.507365 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507371 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:10.507377 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:10.507383 | orchestrator | 2026-04-09 06:58:10.507389 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:10.507396 | orchestrator | Thursday 09 April 2026 06:58:08 +0000 (0:00:00.516) 0:00:11.186 ******** 2026-04-09 06:58:10.507402 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:10.507406 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:10.507409 | orchestrator | 
ok: [testbed-node-2] 2026-04-09 06:58:10.507413 | orchestrator | 2026-04-09 06:58:10.507417 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:10.507421 | orchestrator | Thursday 09 April 2026 06:58:08 +0000 (0:00:00.351) 0:00:11.538 ******** 2026-04-09 06:58:10.507425 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507429 | orchestrator | 2026-04-09 06:58:10.507432 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:10.507436 | orchestrator | Thursday 09 April 2026 06:58:08 +0000 (0:00:00.132) 0:00:11.670 ******** 2026-04-09 06:58:10.507440 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507444 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:10.507447 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:10.507451 | orchestrator | 2026-04-09 06:58:10.507455 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:10.507459 | orchestrator | Thursday 09 April 2026 06:58:09 +0000 (0:00:00.323) 0:00:11.994 ******** 2026-04-09 06:58:10.507463 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:10.507467 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:10.507470 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:10.507474 | orchestrator | 2026-04-09 06:58:10.507478 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:10.507482 | orchestrator | Thursday 09 April 2026 06:58:09 +0000 (0:00:00.510) 0:00:12.504 ******** 2026-04-09 06:58:10.507490 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507494 | orchestrator | 2026-04-09 06:58:10.507497 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:10.507501 | orchestrator | Thursday 09 April 2026 06:58:09 +0000 (0:00:00.138) 0:00:12.643 
******** 2026-04-09 06:58:10.507505 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:10.507509 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:10.507512 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:10.507516 | orchestrator | 2026-04-09 06:58:10.507520 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:10.507524 | orchestrator | Thursday 09 April 2026 06:58:10 +0000 (0:00:00.310) 0:00:12.953 ******** 2026-04-09 06:58:10.507534 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:10.507541 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:10.507547 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:10.507552 | orchestrator | 2026-04-09 06:58:10.507559 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:10.507571 | orchestrator | Thursday 09 April 2026 06:58:10 +0000 (0:00:00.332) 0:00:13.286 ******** 2026-04-09 06:58:25.152918 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153069 | orchestrator | 2026-04-09 06:58:25.153088 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:25.153101 | orchestrator | Thursday 09 April 2026 06:58:10 +0000 (0:00:00.118) 0:00:13.404 ******** 2026-04-09 06:58:25.153112 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153124 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:25.153136 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:25.153147 | orchestrator | 2026-04-09 06:58:25.153158 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:25.153169 | orchestrator | Thursday 09 April 2026 06:58:11 +0000 (0:00:00.500) 0:00:13.905 ******** 2026-04-09 06:58:25.153180 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:25.153192 | orchestrator | ok: [testbed-node-1] 2026-04-09 
06:58:25.153203 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:25.153214 | orchestrator | 2026-04-09 06:58:25.153225 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:25.153237 | orchestrator | Thursday 09 April 2026 06:58:11 +0000 (0:00:00.338) 0:00:14.243 ******** 2026-04-09 06:58:25.153248 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153260 | orchestrator | 2026-04-09 06:58:25.153271 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:25.153282 | orchestrator | Thursday 09 April 2026 06:58:11 +0000 (0:00:00.136) 0:00:14.380 ******** 2026-04-09 06:58:25.153293 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153305 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:25.153320 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:25.153338 | orchestrator | 2026-04-09 06:58:25.153352 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:25.153369 | orchestrator | Thursday 09 April 2026 06:58:11 +0000 (0:00:00.297) 0:00:14.678 ******** 2026-04-09 06:58:25.153381 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:25.153396 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:25.153412 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:25.153423 | orchestrator | 2026-04-09 06:58:25.153434 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:25.153445 | orchestrator | Thursday 09 April 2026 06:58:12 +0000 (0:00:00.532) 0:00:15.210 ******** 2026-04-09 06:58:25.153456 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153467 | orchestrator | 2026-04-09 06:58:25.153478 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:25.153489 | orchestrator | Thursday 09 April 2026 06:58:12 +0000 
(0:00:00.162) 0:00:15.373 ******** 2026-04-09 06:58:25.153500 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153511 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:25.153522 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:25.153533 | orchestrator | 2026-04-09 06:58:25.153544 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:25.153555 | orchestrator | Thursday 09 April 2026 06:58:12 +0000 (0:00:00.306) 0:00:15.679 ******** 2026-04-09 06:58:25.153566 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:25.153577 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:58:25.153588 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:25.153599 | orchestrator | 2026-04-09 06:58:25.153610 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:25.153621 | orchestrator | Thursday 09 April 2026 06:58:13 +0000 (0:00:00.320) 0:00:16.000 ******** 2026-04-09 06:58:25.153632 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153667 | orchestrator | 2026-04-09 06:58:25.153678 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:25.153689 | orchestrator | Thursday 09 April 2026 06:58:13 +0000 (0:00:00.126) 0:00:16.127 ******** 2026-04-09 06:58:25.153700 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153711 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:25.153722 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:25.153733 | orchestrator | 2026-04-09 06:58:25.153744 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 06:58:25.153755 | orchestrator | Thursday 09 April 2026 06:58:13 +0000 (0:00:00.524) 0:00:16.651 ******** 2026-04-09 06:58:25.153766 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:58:25.153777 | orchestrator | ok: 
[testbed-node-1] 2026-04-09 06:58:25.153788 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:58:25.153798 | orchestrator | 2026-04-09 06:58:25.153809 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 06:58:25.153820 | orchestrator | Thursday 09 April 2026 06:58:14 +0000 (0:00:00.344) 0:00:16.995 ******** 2026-04-09 06:58:25.153831 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153842 | orchestrator | 2026-04-09 06:58:25.153853 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 06:58:25.153864 | orchestrator | Thursday 09 April 2026 06:58:14 +0000 (0:00:00.142) 0:00:17.138 ******** 2026-04-09 06:58:25.153875 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.153886 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:25.153897 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:25.153908 | orchestrator | 2026-04-09 06:58:25.153919 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-09 06:58:25.153945 | orchestrator | Thursday 09 April 2026 06:58:14 +0000 (0:00:00.319) 0:00:17.458 ******** 2026-04-09 06:58:25.153956 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:58:25.153967 | orchestrator | changed: [testbed-node-1] 2026-04-09 06:58:25.153978 | orchestrator | changed: [testbed-node-2] 2026-04-09 06:58:25.153989 | orchestrator | 2026-04-09 06:58:25.154000 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-09 06:58:25.154011 | orchestrator | Thursday 09 April 2026 06:58:16 +0000 (0:00:01.769) 0:00:19.227 ******** 2026-04-09 06:58:25.154171 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-09 06:58:25.154184 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-09 06:58:25.154195 | 
orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-09 06:58:25.154207 | orchestrator | 2026-04-09 06:58:25.154218 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-09 06:58:25.154248 | orchestrator | Thursday 09 April 2026 06:58:18 +0000 (0:00:01.874) 0:00:21.102 ******** 2026-04-09 06:58:25.154260 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-09 06:58:25.154272 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-09 06:58:25.154283 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-09 06:58:25.154294 | orchestrator | 2026-04-09 06:58:25.154305 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-09 06:58:25.154316 | orchestrator | Thursday 09 April 2026 06:58:20 +0000 (0:00:01.842) 0:00:22.944 ******** 2026-04-09 06:58:25.154327 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 06:58:25.154338 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 06:58:25.154349 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 06:58:25.154360 | orchestrator | 2026-04-09 06:58:25.154371 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-09 06:58:25.154393 | orchestrator | Thursday 09 April 2026 06:58:21 +0000 (0:00:01.643) 0:00:24.588 ******** 2026-04-09 06:58:25.154404 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.154415 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:25.154426 | orchestrator | skipping: [testbed-node-2] 
2026-04-09 06:58:25.154437 | orchestrator | 2026-04-09 06:58:25.154448 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-09 06:58:25.154459 | orchestrator | Thursday 09 April 2026 06:58:22 +0000 (0:00:00.331) 0:00:24.920 ******** 2026-04-09 06:58:25.154470 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:58:25.154481 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:58:25.154492 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:58:25.154502 | orchestrator | 2026-04-09 06:58:25.154513 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 06:58:25.154524 | orchestrator | Thursday 09 April 2026 06:58:22 +0000 (0:00:00.517) 0:00:25.437 ******** 2026-04-09 06:58:25.154535 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:58:25.154546 | orchestrator | 2026-04-09 06:58:25.154557 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-09 06:58:25.154568 | orchestrator | Thursday 09 April 2026 06:58:23 +0000 (0:00:00.973) 0:00:26.410 ******** 2026-04-09 06:58:25.154593 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 06:58:25.154620 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 
'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:26.076816 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:26.076933 | orchestrator |
2026-04-09 06:58:26.076948 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-04-09 06:58:26.076955 | orchestrator | Thursday 09 April 2026 06:58:25 +0000 (0:00:01.817) 0:00:28.227 ********
2026-04-09 06:58:26.076976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:26.076983 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:58:26.076994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'],
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:26.077004 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:58:26.077014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:29.120422 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:58:29.120552 | orchestrator |
2026-04-09 06:58:29.120575 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-04-09 06:58:29.120615 | orchestrator | Thursday 09 April 2026 06:58:26 +0000 (0:00:00.742) 0:00:28.969 ********
2026-04-09 06:58:29.120638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR':
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:29.120728 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:58:29.120785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'},
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:29.120807 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:58:29.120826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:29.120856 | orchestrator | skipping: [testbed-node-2]
2026-04-09 06:58:29.120873 | orchestrator |
2026-04-09 06:58:29.120890 | orchestrator | TASK [service-check-containers : horizon | Check containers] *******************
2026-04-09 06:58:29.120907 | orchestrator | Thursday 09 April 2026 06:58:27 +0000 (0:00:01.336) 0:00:30.306 ********
2026-04-09 06:58:29.120956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:30.238354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no',
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:30.238511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT':
'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:30.238551 | orchestrator |
2026-04-09 06:58:30.238564 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] ***
2026-04-09 06:58:30.238576 | orchestrator | Thursday 09 April 2026 06:58:29 +0000 (0:00:01.853) 0:00:32.160 ********
2026-04-09 06:58:30.238587 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 06:58:30.238600 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 06:58:30.238616 | orchestrator | }
2026-04-09 06:58:30.238626 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 06:58:30.238636 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 06:58:30.238646 | orchestrator | }
2026-04-09 06:58:30.238655 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 06:58:30.238665 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 06:58:30.238675 | orchestrator | }
2026-04-09 06:58:30.238685 | orchestrator |
2026-04-09 06:58:30.238695 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 06:58:30.238705 | orchestrator | Thursday 09 April 2026 06:58:29 +0000 (0:00:00.372) 0:00:32.532 ********
2026-04-09 06:58:30.238722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:58:30.238744 | orchestrator | skipping: [testbed-node-0]
2026-04-09 06:58:30.238766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER':
'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-09 06:59:40.666715 | orchestrator | skipping: [testbed-node-1]
2026-04-09 06:59:40.666893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 06:59:40.666963 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:59:40.666985 | orchestrator | 2026-04-09 06:59:40.667008 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 06:59:40.667026 | orchestrator | Thursday 09 April 2026 06:58:31 +0000 (0:00:01.514) 0:00:34.047 ******** 2026-04-09 06:59:40.667044 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:59:40.667063 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:59:40.667080 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:59:40.667097 | orchestrator | 2026-04-09 06:59:40.667115 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 06:59:40.667133 | orchestrator | Thursday 09 April 2026 06:58:31 +0000 (0:00:00.371) 0:00:34.418 ******** 2026-04-09 06:59:40.667152 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:59:40.667169 | orchestrator | 2026-04-09 06:59:40.667223 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-09 06:59:40.667243 | orchestrator | Thursday 09 April 2026 06:58:32 +0000 (0:00:00.945) 0:00:35.364 ******** 2026-04-09 06:59:40.667263 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:59:40.667283 | orchestrator | 2026-04-09 06:59:40.667302 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-09 06:59:40.667321 | orchestrator | Thursday 09 April 2026 06:59:09 +0000 (0:00:36.843) 0:01:12.207 ******** 2026-04-09 06:59:40.667339 | orchestrator | 2026-04-09 06:59:40.667358 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-09 06:59:40.667377 | orchestrator | Thursday 09 April 2026 06:59:09 +0000 
(0:00:00.238) 0:01:12.446 ******** 2026-04-09 06:59:40.667395 | orchestrator | 2026-04-09 06:59:40.667412 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-09 06:59:40.667429 | orchestrator | Thursday 09 April 2026 06:59:09 +0000 (0:00:00.086) 0:01:12.532 ******** 2026-04-09 06:59:40.667448 | orchestrator | 2026-04-09 06:59:40.667468 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-09 06:59:40.667487 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-09 06:59:40.667505 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-09 06:59:40.667545 | orchestrator | Thursday 09 April 2026 06:59:09 +0000 (0:00:00.073) 0:01:12.605 ******** 2026-04-09 06:59:40.667564 | orchestrator | changed: [testbed-node-0] 2026-04-09 06:59:40.667584 | orchestrator | changed: [testbed-node-2] 2026-04-09 06:59:40.667602 | orchestrator | changed: [testbed-node-1] 2026-04-09 06:59:40.667619 | orchestrator | 2026-04-09 06:59:40.667663 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 06:59:40.667686 | orchestrator | testbed-node-0 : ok=36  changed=6  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-04-09 06:59:40.667707 | orchestrator | testbed-node-1 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 06:59:40.667744 | orchestrator | testbed-node-2 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 06:59:40.667763 | orchestrator | 2026-04-09 06:59:40.667781 | orchestrator | 2026-04-09 06:59:40.667800 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 06:59:40.667818 | orchestrator | Thursday 09 April 2026 06:59:40 +0000 (0:00:30.444) 0:01:43.050 ******** 2026-04-09 06:59:40.667837 | orchestrator | 
=============================================================================== 2026-04-09 06:59:40.667856 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 36.84s 2026-04-09 06:59:40.667873 | orchestrator | horizon : Restart horizon container ------------------------------------ 30.45s 2026-04-09 06:59:40.667890 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.87s 2026-04-09 06:59:40.667907 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.85s 2026-04-09 06:59:40.667925 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.84s 2026-04-09 06:59:40.667942 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.82s 2026-04-09 06:59:40.667961 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.77s 2026-04-09 06:59:40.667979 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.71s 2026-04-09 06:59:40.667996 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.68s 2026-04-09 06:59:40.668012 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.64s 2026-04-09 06:59:40.668042 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.51s 2026-04-09 06:59:40.668062 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.34s 2026-04-09 06:59:40.668082 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.31s 2026-04-09 06:59:40.668101 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.15s 2026-04-09 06:59:40.668120 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.09s 2026-04-09 06:59:40.668136 | orchestrator | horizon : 
include_tasks ------------------------------------------------- 0.97s 2026-04-09 06:59:40.668152 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.95s 2026-04-09 06:59:40.668169 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.74s 2026-04-09 06:59:40.668221 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-04-09 06:59:40.668239 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-04-09 06:59:40.846973 | orchestrator | + osism apply -a upgrade skyline 2026-04-09 06:59:42.146473 | orchestrator | 2026-04-09 06:59:42 | INFO  | Prepare task for execution of skyline. 2026-04-09 06:59:42.214323 | orchestrator | 2026-04-09 06:59:42 | INFO  | Task 7957986a-6ae6-45f0-8bb9-3e8d774217ba (skyline) was prepared for execution. 2026-04-09 06:59:42.214471 | orchestrator | 2026-04-09 06:59:42 | INFO  | It takes a moment until task 7957986a-6ae6-45f0-8bb9-3e8d774217ba (skyline) has been started and output is visible here. 
2026-04-09 06:59:52.433085 | orchestrator | 2026-04-09 06:59:52.433273 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 06:59:52.433296 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-09 06:59:52.433309 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-09 06:59:52.433333 | orchestrator | 2026-04-09 06:59:52.433345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 06:59:52.433356 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-09 06:59:52.433394 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-09 06:59:52.433418 | orchestrator | Thursday 09 April 2026 06:59:46 +0000 (0:00:01.250) 0:00:01.250 ******** 2026-04-09 06:59:52.433429 | orchestrator | ok: [testbed-node-0] 2026-04-09 06:59:52.433441 | orchestrator | ok: [testbed-node-1] 2026-04-09 06:59:52.433452 | orchestrator | ok: [testbed-node-2] 2026-04-09 06:59:52.433463 | orchestrator | 2026-04-09 06:59:52.433474 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 06:59:52.433485 | orchestrator | Thursday 09 April 2026 06:59:47 +0000 (0:00:00.715) 0:00:01.965 ******** 2026-04-09 06:59:52.433497 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-04-09 06:59:52.433508 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-04-09 06:59:52.433519 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-04-09 06:59:52.433530 | orchestrator | 2026-04-09 06:59:52.433542 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-04-09 06:59:52.433553 | orchestrator | 2026-04-09 06:59:52.433564 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-09 
06:59:52.433575 | orchestrator | Thursday 09 April 2026 06:59:48 +0000 (0:00:00.887) 0:00:02.853 ******** 2026-04-09 06:59:52.433586 | orchestrator | included: /ansible/roles/skyline/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:59:52.433599 | orchestrator | 2026-04-09 06:59:52.433612 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-09 06:59:52.433625 | orchestrator | Thursday 09 April 2026 06:59:49 +0000 (0:00:01.444) 0:00:04.298 ******** 2026-04-09 06:59:52.433644 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 06:59:52.433678 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 06:59:52.433716 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 06:59:52.433741 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:59:52.433756 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:59:52.433776 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:59:52.433798 | orchestrator | 2026-04-09 06:59:52.433812 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-09 06:59:52.433825 | orchestrator | Thursday 09 April 2026 06:59:51 +0000 (0:00:02.010) 0:00:06.309 ******** 2026-04-09 06:59:52.433845 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 06:59:55.711435 | orchestrator | 2026-04-09 06:59:55.711542 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-09 06:59:55.711559 | orchestrator | Thursday 09 April 2026 06:59:53 +0000 (0:00:01.165) 0:00:07.474 ******** 2026-04-09 06:59:55.711578 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 06:59:55.711595 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 06:59:55.711626 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 06:59:55.711661 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:59:55.711698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:59:55.711712 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:59:55.711724 | orchestrator | 2026-04-09 06:59:55.711736 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-09 06:59:55.711748 | orchestrator | Thursday 09 April 2026 06:59:55 +0000 (0:00:02.305) 0:00:09.779 ******** 2026-04-09 06:59:55.711765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 06:59:55.711795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:59:56.447030 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:59:56.447138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 06:59:56.447160 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:59:56.447175 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:59:56.447204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 06:59:56.447328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:59:56.447343 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:59:56.447355 | orchestrator | 2026-04-09 06:59:56.447380 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-09 06:59:56.447392 | orchestrator | Thursday 09 April 2026 06:59:55 +0000 (0:00:00.620) 0:00:10.400 ******** 2026-04-09 06:59:56.447404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 06:59:56.447423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:59:56.447444 | orchestrator | skipping: [testbed-node-0] 2026-04-09 06:59:56.447456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 06:59:56.447477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:59:59.643882 | orchestrator | skipping: [testbed-node-1] 2026-04-09 06:59:59.643988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 06:59:59.644024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 06:59:59.644066 | orchestrator | skipping: [testbed-node-2] 2026-04-09 06:59:59.644085 | orchestrator | 2026-04-09 06:59:59.644102 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ****************** 2026-04-09 06:59:59.644120 | orchestrator | Thursday 09 April 2026 06:59:57 +0000 (0:00:01.105) 0:00:11.506 ******** 2026-04-09 06:59:59.644137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 06:59:59.644179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 06:59:59.644200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 06:59:59.644283 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:59:59.644308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 06:59:59.644329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 07:00:07.885815 | orchestrator | 2026-04-09 07:00:07.885913 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-09 07:00:07.885930 | orchestrator | Thursday 09 April 2026 06:59:59 +0000 (0:00:02.636) 0:00:14.142 ******** 2026-04-09 07:00:07.885942 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-09 07:00:07.885953 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-09 07:00:07.885965 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-09 07:00:07.885976 | orchestrator | 2026-04-09 07:00:07.885987 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] 
******************** 2026-04-09 07:00:07.885998 | orchestrator | Thursday 09 April 2026 07:00:01 +0000 (0:00:01.539) 0:00:15.682 ******** 2026-04-09 07:00:07.886009 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-09 07:00:07.886100 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-09 07:00:07.886114 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-09 07:00:07.886126 | orchestrator | 2026-04-09 07:00:07.886137 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-09 07:00:07.886148 | orchestrator | Thursday 09 April 2026 07:00:03 +0000 (0:00:01.972) 0:00:17.655 ******** 2026-04-09 07:00:07.886175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 07:00:07.886192 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 07:00:07.886262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 07:00:07.886279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 07:00:07.886306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 07:00:07.886319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 07:00:07.886331 | orchestrator | 2026-04-09 07:00:07.886343 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-09 07:00:07.886355 | orchestrator | Thursday 09 April 2026 07:00:05 +0000 (0:00:02.708) 0:00:20.364 ******** 2026-04-09 07:00:07.886366 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:00:07.886378 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:00:07.886390 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:00:07.886401 | orchestrator | 2026-04-09 07:00:07.886412 | orchestrator | TASK [service-check-containers : skyline | Check containers] ******************* 2026-04-09 
07:00:07.886423 | orchestrator | Thursday 09 April 2026 07:00:06 +0000 (0:00:00.705) 0:00:21.069 ******** 2026-04-09 07:00:07.886445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 07:00:09.814557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 07:00:09.815330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 07:00:09.815376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 07:00:09.815413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 07:00:09.815457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 07:00:09.815471 | orchestrator | 2026-04-09 07:00:09.815484 | orchestrator | TASK [service-check-containers : skyline | Notify handlers to restart containers] *** 2026-04-09 07:00:09.815496 | orchestrator | Thursday 09 April 2026 07:00:08 +0000 (0:00:02.295) 0:00:23.365 ******** 2026-04-09 07:00:09.815508 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 07:00:09.815520 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:00:09.815531 | orchestrator | } 2026-04-09 07:00:09.815542 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 07:00:09.815554 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:00:09.815565 | orchestrator | } 2026-04-09 07:00:09.815577 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 07:00:09.815588 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:00:09.815599 | orchestrator | } 2026-04-09 07:00:09.815611 | orchestrator | 2026-04-09 07:00:09.815622 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 07:00:09.815633 | orchestrator | Thursday 09 April 2026 07:00:09 +0000 (0:00:00.456) 0:00:23.822 ******** 2026-04-09 07:00:09.815645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 07:00:09.815659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 07:00:09.815677 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 07:00:09.815702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 07:00:46.621176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 07:00:46.621365 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:00:46.621401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 07:00:46.621457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 07:00:46.621478 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:00:46.621490 | orchestrator |
2026-04-09 07:00:46.621503 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-09 07:00:46.621515 | orchestrator | Thursday 09 April 2026 07:00:10 +0000 (0:00:01.231) 0:00:25.054 ********
2026-04-09 07:00:46.621528 | orchestrator |
2026-04-09 07:00:46.621547 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-09 07:00:46.621566 | orchestrator | Thursday 09 April 2026 07:00:10 +0000 (0:00:00.083) 0:00:25.137 ********
2026-04-09 07:00:46.621586 | orchestrator |
2026-04-09 07:00:46.621604 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-09 07:00:46.621621 | orchestrator | Thursday 09 April 2026 07:00:10 +0000 (0:00:00.068) 0:00:25.205 ********
2026-04-09 07:00:46.621636 | orchestrator |
2026-04-09 07:00:46.621654 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-04-09 07:00:46.621672 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-09 07:00:46.621692 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-09 07:00:46.621773 | orchestrator | Thursday 09 April 2026 07:00:10 +0000 (0:00:00.069) 0:00:25.274 ********
2026-04-09 07:00:46.621789 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:00:46.621802 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:00:46.621815 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:00:46.621829 | orchestrator |
2026-04-09 07:00:46.621842 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-04-09 07:00:46.621855 | orchestrator | Thursday 09 April 2026 07:00:29 +0000 (0:00:18.291) 0:00:43.566 ********
2026-04-09 07:00:46.621868 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:00:46.621881 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:00:46.621894 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:00:46.621907 | orchestrator |
2026-04-09 07:00:46.621921 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 07:00:46.621935 | orchestrator | testbed-node-0 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 07:00:46.621950 | orchestrator | testbed-node-1 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 07:00:46.621963 | orchestrator | testbed-node-2 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 07:00:46.621975 | orchestrator |
2026-04-09 07:00:46.621997 | orchestrator |
2026-04-09 07:00:46.622010 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 07:00:46.622088 | orchestrator | Thursday 09 April 2026 07:00:46 +0000 (0:00:17.108) 0:01:00.675 ********
2026-04-09 07:00:46.622102 | orchestrator | ===============================================================================
2026-04-09 07:00:46.622115 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 18.29s
2026-04-09 07:00:46.622126 | orchestrator | skyline : Restart skyline-console container ---------------------------- 17.11s
2026-04-09 07:00:46.622137 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.71s
2026-04-09 07:00:46.622156 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.64s
2026-04-09 07:00:46.622175 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.31s
2026-04-09 07:00:46.622196 | orchestrator | service-check-containers : skyline | Check containers ------------------- 2.30s
2026-04-09 07:00:46.622214 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 2.01s
2026-04-09 07:00:46.622230 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 1.97s
2026-04-09 07:00:46.622241 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.54s
2026-04-09 07:00:46.622252 | orchestrator | skyline : include_tasks ------------------------------------------------- 1.44s
2026-04-09 07:00:46.622263 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.23s
2026-04-09 07:00:46.622279 | orchestrator | skyline : include_tasks ------------------------------------------------- 1.16s
2026-04-09 07:00:46.622461 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.11s
2026-04-09 07:00:46.622478 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s
2026-04-09 07:00:46.622489 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s
2026-04-09 07:00:46.622500 | orchestrator | skyline : Copying over custom logos ------------------------------------- 0.71s
2026-04-09 07:00:46.622511 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS certificate --- 0.62s
2026-04-09 07:00:46.622522 | orchestrator | service-check-containers : skyline | Notify handlers to restart containers --- 0.46s
2026-04-09 07:00:46.622533 | orchestrator | skyline : Flush handlers ------------------------------------------------ 0.22s
2026-04-09
07:00:46.805628 | orchestrator | + osism apply -a upgrade glance
2026-04-09 07:00:48.118280 | orchestrator | 2026-04-09 07:00:48 | INFO  | Prepare task for execution of glance.
2026-04-09 07:00:48.186660 | orchestrator | 2026-04-09 07:00:48 | INFO  | Task 3a1b2fd1-0179-49e1-b463-49217c97bf0b (glance) was prepared for execution.
2026-04-09 07:00:48.186758 | orchestrator | 2026-04-09 07:00:48 | INFO  | It takes a moment until task 3a1b2fd1-0179-49e1-b463-49217c97bf0b (glance) has been started and output is visible here.
2026-04-09 07:01:14.254207 | orchestrator |
2026-04-09 07:01:14.254397 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 07:01:14.254420 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-09 07:01:14.254434 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-09 07:01:14.254457 | orchestrator |
2026-04-09 07:01:14.254469 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 07:01:14.254480 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-09 07:01:14.254491 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-09 07:01:14.254513 | orchestrator | Thursday 09 April 2026 07:00:52 +0000 (0:00:01.171) 0:00:01.172 ********
2026-04-09 07:01:14.254526 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:01:14.254564 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:01:14.254581 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:01:14.254600 | orchestrator |
2026-04-09 07:01:14.254638 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 07:01:14.254658 | orchestrator | Thursday 09 April 2026 07:00:53 +0000 (0:00:00.947) 0:00:02.119 ********
2026-04-09 07:01:14.254678 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-09 07:01:14.254698 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-09 07:01:14.254716 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-09 07:01:14.254735 | orchestrator |
2026-04-09 07:01:14.254749 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-09 07:01:14.254762 | orchestrator |
2026-04-09 07:01:14.254776 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-09 07:01:14.254789 | orchestrator | Thursday 09 April 2026 07:00:54 +0000 (0:00:00.745) 0:00:02.865 ********
2026-04-09 07:01:14.254803 | orchestrator | included: /ansible/roles/glance/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 07:01:14.254817 | orchestrator |
2026-04-09 07:01:14.254831 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-09 07:01:14.254845 | orchestrator | Thursday 09 April 2026 07:00:55 +0000 (0:00:01.225) 0:00:04.091 ********
2026-04-09 07:01:14.254859 | orchestrator | included: /ansible/roles/glance/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 07:01:14.254872 | orchestrator |
2026-04-09 07:01:14.254885 | orchestrator | TASK [glance : Start Glance upgrade] *******************************************
2026-04-09 07:01:14.254898 | orchestrator | Thursday 09 April 2026 07:00:56 +0000 (0:00:01.271) 0:00:05.362 ********
2026-04-09 07:01:14.254910 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:01:14.254923 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:01:14.254935 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:01:14.254949 | orchestrator |
2026-04-09 07:01:14.254961 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-09 07:01:14.254974 | orchestrator | Thursday 09 April 2026 07:00:57 +0000 (0:00:00.573) 0:00:05.936 ********
2026-04-09
07:01:14.254988 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:01:14.255003 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:01:14.255016 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-0 2026-04-09 07:01:14.255029 | orchestrator | 2026-04-09 07:01:14.255042 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-09 07:01:14.255056 | orchestrator | Thursday 09 April 2026 07:00:58 +0000 (0:00:00.836) 0:00:06.772 ******** 2026-04-09 07:01:14.255098 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-09 07:01:14.255125 | orchestrator |
2026-04-09 07:01:14.255136 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-09 07:01:14.255147 | orchestrator | Thursday 09 April 2026 07:01:02 +0000 (0:00:03.751) 0:00:10.524 ********
2026-04-09 07:01:14.255158 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0
2026-04-09 07:01:14.255169 | orchestrator |
2026-04-09 07:01:14.255180 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-09 07:01:14.255191 | orchestrator | Thursday 09 April 2026 07:01:02 +0000 (0:00:00.581) 0:00:11.105 ********
2026-04-09 07:01:14.255202 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:01:14.255213 | orchestrator |
2026-04-09 07:01:14.255224 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-09 07:01:14.255235 | orchestrator | Thursday 09 April 2026 07:01:06 +0000 (0:00:03.667) 0:00:14.772 ********
2026-04-09 07:01:14.255252 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-09 07:01:14.255265 | orchestrator |
2026-04-09 07:01:14.255277 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-09 07:01:14.255287 | orchestrator | Thursday 09 April 2026 07:01:07 +0000 (0:00:01.507) 0:00:16.280 ********
2026-04-09 07:01:14.255298 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-09 07:01:14.255310 | orchestrator |
2026-04-09 07:01:14.255352 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-09 07:01:14.255366 | orchestrator | Thursday 09 April 2026 07:01:08 +0000 (0:00:00.994) 0:00:17.274 ********
2026-04-09 07:01:14.255377 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:01:14.255388 | orchestrator |
2026-04-09 07:01:14.255399 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-09 07:01:14.255410 | orchestrator | Thursday 09 April 2026 07:01:09 +0000 (0:00:00.708) 0:00:17.982 ********
2026-04-09 07:01:14.255421 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:01:14.255432 | orchestrator |
2026-04-09 07:01:14.255443 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-09 07:01:14.255454 | orchestrator | Thursday 09 April 2026 07:01:09 +0000 (0:00:00.134) 0:00:18.117 ********
2026-04-09 07:01:14.255465 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:01:14.255476 | orchestrator |
2026-04-09 07:01:14.255487 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-09 07:01:14.255497 | orchestrator | Thursday 09 April 2026 07:01:09 +0000 (0:00:00.125) 0:00:18.243 ********
2026-04-09 07:01:14.255508 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0
2026-04-09 07:01:14.255519 | orchestrator |
2026-04-09 07:01:14.255530 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-09 07:01:14.255541 | orchestrator | Thursday 09 April 2026 07:01:10 +0000 (0:00:00.633) 0:00:18.876 ********
2026-04-09 07:01:14.255554 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:01:14.255573 | orchestrator | 2026-04-09 07:01:14.255585 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-09 07:01:14.255604 | orchestrator | Thursday 09 April 2026 07:01:14 +0000 (0:00:03.826) 0:00:22.703 ******** 2026-04-09 07:02:06.576589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 07:02:06.576700 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:02:06.576715 | orchestrator | 2026-04-09 07:02:06.576725 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-09 07:02:06.576736 | orchestrator | Thursday 09 April 2026 07:01:17 +0000 (0:00:03.064) 
0:00:25.768 ******** 2026-04-09 07:02:06.576746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 07:02:06.576778 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:02:06.576787 | orchestrator | 2026-04-09 07:02:06.576796 | orchestrator | TASK [glance : Creating TLS 
backend PEM File] ********************************** 2026-04-09 07:02:06.576804 | orchestrator | Thursday 09 April 2026 07:01:20 +0000 (0:00:03.196) 0:00:28.964 ******** 2026-04-09 07:02:06.576813 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:02:06.576821 | orchestrator | 2026-04-09 07:02:06.576829 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-09 07:02:06.576854 | orchestrator | Thursday 09 April 2026 07:01:23 +0000 (0:00:03.254) 0:00:32.218 ******** 2026-04-09 07:02:06.576871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-09 07:02:06.576881 | orchestrator |
2026-04-09 07:02:06.576890 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-09 07:02:06.576898 | orchestrator | Thursday 09 April 2026 07:01:27 +0000 (0:00:04.217) 0:00:36.435 ********
2026-04-09 07:02:06.576906 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:02:06.576913 | orchestrator |
2026-04-09 07:02:06.576921 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-09 07:02:06.576937 | orchestrator | Thursday 09 April 2026 07:01:33 +0000 (0:00:05.842) 0:00:42.277 ********
2026-04-09 07:02:06.576945 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:02:06.576953 | orchestrator |
2026-04-09 07:02:06.576961 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-09 07:02:06.576969 | orchestrator | Thursday 09 April 2026 07:01:36 +0000 (0:00:03.125) 0:00:45.403 ********
2026-04-09 07:02:06.576977 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:02:06.576985 | orchestrator |
2026-04-09 07:02:06.576993 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-09 07:02:06.577001 | orchestrator | Thursday 09 April 2026 07:01:40 +0000 (0:00:03.200) 0:00:48.604 ********
2026-04-09 07:02:06.577009 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:02:06.577017 | orchestrator |
2026-04-09 07:02:06.577025 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-09 07:02:06.577033 | orchestrator | Thursday 09 April 2026 07:01:43 +0000 (0:00:03.250) 0:00:51.855 ********
2026-04-09 07:02:06.577041 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:02:06.577049 | orchestrator |
2026-04-09 07:02:06.577056 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-09 07:02:06.577064 | orchestrator | Thursday 09 April 2026 07:01:43 +0000 (0:00:00.142) 0:00:51.998 ********
2026-04-09 07:02:06.577072 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-09 07:02:06.577082 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:02:06.577090 | orchestrator |
2026-04-09 07:02:06.577098 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-09 07:02:06.577106 | orchestrator | Thursday 09 April 2026 07:01:46 +0000 (0:00:03.278) 0:00:55.276 ********
2026-04-09 07:02:06.577115 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:02:06.577125 | orchestrator |
2026-04-09 07:02:06.577135 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************
2026-04-09 07:02:06.577144 | orchestrator | Thursday 09 April 2026 07:01:50 +0000 (0:00:03.347) 0:00:58.624 ********
2026-04-09 07:02:06.577154 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:02:06.577163 | orchestrator |
2026-04-09 07:02:06.577172 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-09 07:02:06.577180 | orchestrator | Thursday 09 April 2026 07:01:53 +0000 (0:00:03.263) 0:01:01.888 ********
2026-04-09 07:02:06.577190 | orchestrator | skipping: [testbed-node-1]
2026-04-09 07:02:06.577199 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:02:06.577209 | orchestrator | included: /ansible/roles/glance/tasks/stop_service.yml for testbed-node-0
2026-04-09 07:02:06.577220 | orchestrator |
2026-04-09 07:02:06.577229 | orchestrator | TASK [glance : Stop glance service] ********************************************
2026-04-09 07:02:06.577239 | orchestrator | Thursday 09 April 2026 07:01:54 +0000 (0:00:00.975) 0:01:02.863 ********
2026-04-09 07:02:06.577248 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:02:06.577258 | orchestrator |
2026-04-09 07:02:06.577266 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-09 07:02:06.577282 | orchestrator | Thursday 09 April 2026 07:02:06 +0000 (0:00:12.160) 0:01:15.024 ********
2026-04-09 07:03:08.688504 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:03:08.688622 | orchestrator |
2026-04-09 07:03:08.688639 | orchestrator | TASK [glance : Running Glance database expand container] ***********************
2026-04-09 07:03:08.688652 | orchestrator | Thursday 09 April 2026 07:02:08 +0000 (0:00:02.293) 0:01:17.317 ********
2026-04-09 07:03:08.688664 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:03:08.688675 | orchestrator |
2026-04-09 07:03:08.688686 | orchestrator | TASK [glance : Running Glance database migrate container] **********************
2026-04-09 07:03:08.688698 | orchestrator | Thursday 09 April 2026 07:02:36 +0000 (0:00:27.644) 0:01:44.962 ********
2026-04-09 07:03:08.688709 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:03:08.688720 | orchestrator |
2026-04-09 07:03:08.688731 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-09 07:03:08.688768 | orchestrator | Thursday 09 April 2026 07:02:52 +0000 (0:00:15.535) 0:02:00.498 ********
2026-04-09 07:03:08.688780 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:03:08.688806 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-1, testbed-node-2
2026-04-09 07:03:08.688819 | orchestrator |
2026-04-09 07:03:08.688830 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-09 07:03:08.688841 | orchestrator |
Thursday 09 April 2026 07:02:52 +0000 (0:00:00.523) 0:02:01.021 ******** 2026-04-09 07:03:08.688858 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:03:08.688895 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:03:08.688917 | orchestrator | 2026-04-09 07:03:08.688929 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 07:03:08.688940 | orchestrator | Thursday 09 April 2026 07:02:56 +0000 (0:00:04.110) 0:02:05.132 ******** 2026-04-09 07:03:08.688952 | orchestrator | included: 
/ansible/roles/glance/tasks/external_ceph.yml for testbed-node-1, testbed-node-2 2026-04-09 07:03:08.688964 | orchestrator | 2026-04-09 07:03:08.688975 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-09 07:03:08.688992 | orchestrator | Thursday 09 April 2026 07:02:57 +0000 (0:00:00.424) 0:02:05.556 ******** 2026-04-09 07:03:08.689005 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:03:08.689018 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:03:08.689032 | orchestrator | 2026-04-09 07:03:08.689045 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-09 07:03:08.689058 | orchestrator | Thursday 09 April 2026 07:03:00 +0000 (0:00:03.735) 0:02:09.292 ******** 2026-04-09 07:03:08.689071 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-09 07:03:08.689086 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-09 07:03:08.689100 | orchestrator | 2026-04-09 07:03:08.689112 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-09 07:03:08.689123 | orchestrator | Thursday 09 April 2026 07:03:02 +0000 (0:00:01.298) 0:02:10.591 ******** 2026-04-09 07:03:08.689134 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-09 07:03:08.689145 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-09 07:03:08.689156 | orchestrator | 2026-04-09 07:03:08.689167 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-09 07:03:08.689179 | orchestrator | 
Thursday 09 April 2026 07:03:03 +0000 (0:00:01.138) 0:02:11.729 ******** 2026-04-09 07:03:08.689190 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:03:08.689300 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:03:08.689313 | orchestrator | 2026-04-09 07:03:08.689324 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-09 07:03:08.689336 | orchestrator | Thursday 09 April 2026 07:03:04 +0000 (0:00:00.791) 0:02:12.520 ******** 2026-04-09 07:03:08.689347 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:08.689358 | orchestrator | 2026-04-09 07:03:08.689369 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-09 07:03:08.689380 | orchestrator | Thursday 09 April 2026 07:03:04 +0000 (0:00:00.122) 0:02:12.643 ******** 2026-04-09 07:03:08.689391 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:08.689402 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:08.689413 | orchestrator | 2026-04-09 07:03:08.689446 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 07:03:08.689457 | orchestrator | Thursday 09 April 2026 07:03:04 +0000 (0:00:00.210) 0:02:12.853 ******** 2026-04-09 07:03:08.689468 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-1, testbed-node-2 2026-04-09 07:03:08.689480 | orchestrator | 2026-04-09 07:03:08.689490 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-09 07:03:08.689502 | orchestrator | Thursday 09 April 2026 07:03:04 +0000 (0:00:00.406) 0:02:13.260 ******** 2026-04-09 07:03:08.689528 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:03:15.041068 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:03:15.041160 | orchestrator | 2026-04-09 07:03:15.041173 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-09 07:03:15.041182 | orchestrator | Thursday 09 April 2026 07:03:08 +0000 (0:00:04.011) 0:02:17.271 ******** 2026-04-09 07:03:15.041192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 07:03:15.041217 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:15.041244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 07:03:15.041254 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:15.041261 | orchestrator | 2026-04-09 07:03:15.041269 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-09 07:03:15.041276 | orchestrator | Thursday 09 April 2026 07:03:12 +0000 (0:00:03.250) 0:02:20.522 ******** 2026-04-09 07:03:15.041284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 07:03:15.041298 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:15.041316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 07:03:54.313930 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:54.314075 | orchestrator | 2026-04-09 07:03:54.314091 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-09 07:03:54.314100 | orchestrator | Thursday 09 April 2026 07:03:15 +0000 (0:00:03.219) 0:02:23.741 ******** 2026-04-09 07:03:54.314108 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:54.314116 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:54.314124 | orchestrator | 2026-04-09 07:03:54.314130 | orchestrator | TASK [glance : Copying over config.json files for 
services] ******************** 2026-04-09 07:03:54.314135 | orchestrator | Thursday 09 April 2026 07:03:18 +0000 (0:00:03.415) 0:02:27.157 ******** 2026-04-09 07:03:54.314143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:03:54.314177 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:03:54.314183 | orchestrator | 2026-04-09 07:03:54.314188 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-09 07:03:54.314204 | orchestrator | Thursday 09 April 2026 07:03:22 +0000 (0:00:04.130) 
0:02:31.288 ******** 2026-04-09 07:03:54.314209 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:03:54.314213 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:03:54.314218 | orchestrator | 2026-04-09 07:03:54.314222 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-09 07:03:54.314227 | orchestrator | Thursday 09 April 2026 07:03:28 +0000 (0:00:05.995) 0:02:37.283 ******** 2026-04-09 07:03:54.314231 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:54.314235 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:54.314240 | orchestrator | 2026-04-09 07:03:54.314244 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-09 07:03:54.314248 | orchestrator | Thursday 09 April 2026 07:03:32 +0000 (0:00:03.514) 0:02:40.797 ******** 2026-04-09 07:03:54.314252 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:54.314257 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:54.314265 | orchestrator | 2026-04-09 07:03:54.314270 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-09 07:03:54.314274 | orchestrator | Thursday 09 April 2026 07:03:35 +0000 (0:00:03.334) 0:02:44.132 ******** 2026-04-09 07:03:54.314278 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:54.314283 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:54.314287 | orchestrator | 2026-04-09 07:03:54.314291 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-09 07:03:54.314296 | orchestrator | Thursday 09 April 2026 07:03:39 +0000 (0:00:03.443) 0:02:47.576 ******** 2026-04-09 07:03:54.314300 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:54.314304 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:54.314309 | orchestrator | 2026-04-09 07:03:54.314313 | orchestrator | TASK [glance : Copying over 
glance-haproxy-tls.cfg] **************************** 2026-04-09 07:03:54.314318 | orchestrator | Thursday 09 April 2026 07:03:39 +0000 (0:00:00.248) 0:02:47.825 ******** 2026-04-09 07:03:54.314322 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-09 07:03:54.314327 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:54.314332 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-09 07:03:54.314336 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:54.314341 | orchestrator | 2026-04-09 07:03:54.314345 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-09 07:03:54.314350 | orchestrator | Thursday 09 April 2026 07:03:42 +0000 (0:00:03.542) 0:02:51.367 ******** 2026-04-09 07:03:54.314354 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:54.314359 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:54.314363 | orchestrator | 2026-04-09 07:03:54.314367 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-09 07:03:54.314372 | orchestrator | Thursday 09 April 2026 07:03:46 +0000 (0:00:03.695) 0:02:55.063 ******** 2026-04-09 07:03:54.314376 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:54.314380 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:54.314385 | orchestrator | 2026-04-09 07:03:54.314389 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-04-09 07:03:54.314393 | orchestrator | Thursday 09 April 2026 07:03:50 +0000 (0:00:03.682) 0:02:58.746 ******** 2026-04-09 07:03:54.314402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:03:54.314416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:03:59.239024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 07:03:59.239123 | orchestrator | 2026-04-09 07:03:59.239155 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-04-09 07:03:59.239168 | orchestrator | Thursday 09 April 2026 07:03:54 +0000 (0:00:04.431) 0:03:03.177 ******** 2026-04-09 07:03:59.239180 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 07:03:59.239192 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:03:59.239223 | orchestrator | } 2026-04-09 07:03:59.239234 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 07:03:59.239244 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:03:59.239254 | orchestrator | } 2026-04-09 07:03:59.239263 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 07:03:59.239273 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 
07:03:59.239283 | orchestrator | } 2026-04-09 07:03:59.239293 | orchestrator | 2026-04-09 07:03:59.239304 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 07:03:59.239313 | orchestrator | Thursday 09 April 2026 07:03:55 +0000 (0:00:00.360) 0:03:03.537 ******** 2026-04-09 07:03:59.239340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 07:03:59.239353 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:03:59.239370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 07:03:59.239388 
| orchestrator | skipping: [testbed-node-2] 2026-04-09 07:03:59.239399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 07:03:59.239409 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:03:59.239419 | orchestrator | 2026-04-09 07:03:59.239429 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-04-09 07:03:59.239441 | orchestrator | Thursday 09 April 2026 07:03:59 +0000 (0:00:03.980) 0:03:07.518 ******** 2026-04-09 07:03:59.239458 | orchestrator | 2026-04-09 07:03:59.239579 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-09 07:03:59.239597 | orchestrator | Thursday 09 April 2026 07:03:59 +0000 (0:00:00.083) 0:03:07.602 ******** 2026-04-09 07:03:59.239614 | orchestrator | 2026-04-09 07:03:59.239632 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-09 07:03:59.239661 | orchestrator | Thursday 09 April 2026 07:03:59 +0000 (0:00:00.088) 0:03:07.690 ******** 2026-04-09 07:04:57.892722 | orchestrator | 2026-04-09 07:04:57.892843 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-09 07:04:57.892861 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-09 07:04:57.892874 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-09 07:04:57.892897 | orchestrator | Thursday 09 April 2026 07:03:59 +0000 (0:00:00.077) 0:03:07.768 ******** 2026-04-09 07:04:57.892909 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:04:57.892921 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:04:57.892932 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:04:57.892943 | orchestrator | 2026-04-09 07:04:57.892955 | orchestrator | TASK [glance : Running Glance database contract container] ********************* 2026-04-09 07:04:57.892966 | orchestrator | Thursday 09 April 2026 07:04:37 +0000 (0:00:38.165) 0:03:45.933 ******** 2026-04-09 07:04:57.892977 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:04:57.892988 | orchestrator | 2026-04-09 07:04:57.892999 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] 
*************** 2026-04-09 07:04:57.893037 | orchestrator | Thursday 09 April 2026 07:04:53 +0000 (0:00:15.691) 0:04:01.624 ******** 2026-04-09 07:04:57.893050 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:04:57.893060 | orchestrator | 2026-04-09 07:04:57.893072 | orchestrator | TASK [glance : Finish Glance upgrade] ****************************************** 2026-04-09 07:04:57.893083 | orchestrator | Thursday 09 April 2026 07:04:55 +0000 (0:00:02.649) 0:04:04.274 ******** 2026-04-09 07:04:57.893093 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:04:57.893105 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:04:57.893116 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:04:57.893127 | orchestrator | 2026-04-09 07:04:57.893138 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 07:04:57.893149 | orchestrator | Thursday 09 April 2026 07:04:56 +0000 (0:00:00.353) 0:04:04.627 ******** 2026-04-09 07:04:57.893160 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:04:57.893171 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:04:57.893182 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:04:57.893192 | orchestrator | 2026-04-09 07:04:57.893203 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:04:57.893215 | orchestrator | testbed-node-0 : ok=27  changed=11  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-09 07:04:57.893243 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 07:04:57.893255 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-09 07:04:57.893267 | orchestrator | 2026-04-09 07:04:57.893280 | orchestrator | 2026-04-09 07:04:57.893294 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:04:57.893307 | 
orchestrator | Thursday 09 April 2026 07:04:57 +0000 (0:00:01.274) 0:04:05.902 ******** 2026-04-09 07:04:57.893321 | orchestrator | =============================================================================== 2026-04-09 07:04:57.893334 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.17s 2026-04-09 07:04:57.893347 | orchestrator | glance : Running Glance database expand container ---------------------- 27.64s 2026-04-09 07:04:57.893360 | orchestrator | glance : Running Glance database contract container -------------------- 15.69s 2026-04-09 07:04:57.893373 | orchestrator | glance : Running Glance database migrate container --------------------- 15.54s 2026-04-09 07:04:57.893386 | orchestrator | glance : Stop glance service ------------------------------------------- 12.16s 2026-04-09 07:04:57.893400 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.00s 2026-04-09 07:04:57.893413 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.84s 2026-04-09 07:04:57.893426 | orchestrator | service-check-containers : glance | Check containers -------------------- 4.43s 2026-04-09 07:04:57.893439 | orchestrator | glance : Copying over config.json files for services -------------------- 4.22s 2026-04-09 07:04:57.893452 | orchestrator | glance : Copying over config.json files for services -------------------- 4.13s 2026-04-09 07:04:57.893465 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.11s 2026-04-09 07:04:57.893477 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.01s 2026-04-09 07:04:57.893490 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.98s 2026-04-09 07:04:57.893503 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.83s 2026-04-09 07:04:57.893536 | orchestrator | glance 
: Ensuring config directories exist ------------------------------ 3.75s 2026-04-09 07:04:57.893551 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.74s 2026-04-09 07:04:57.893565 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.70s 2026-04-09 07:04:57.893577 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 3.68s 2026-04-09 07:04:57.893596 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.67s 2026-04-09 07:04:57.893608 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.54s 2026-04-09 07:04:58.079420 | orchestrator | + osism apply -a upgrade cinder 2026-04-09 07:04:59.397146 | orchestrator | 2026-04-09 07:04:59 | INFO  | Prepare task for execution of cinder. 2026-04-09 07:04:59.464871 | orchestrator | 2026-04-09 07:04:59 | INFO  | Task a11b0c0d-17e5-4208-a7f9-285032899bb7 (cinder) was prepared for execution. 2026-04-09 07:04:59.464965 | orchestrator | 2026-04-09 07:04:59 | INFO  | It takes a moment until task a11b0c0d-17e5-4208-a7f9-285032899bb7 (cinder) has been started and output is visible here. 
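The PLAY RECAP lines above (ok/changed/unreachable/failed counts per host) are what gate whether it is safe to continue with the next service upgrade. A minimal sketch of checking them programmatically — the `recap_failures` helper and the regex are illustrative assumptions, not part of the OSISM tooling:

```python
import re

# Matches Ansible PLAY RECAP lines like:
#   testbed-node-0 : ok=27 changed=11 unreachable=0 failed=0 skipped=15 ...
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_failures(lines):
    """Return hosts whose recap reports failed or unreachable tasks."""
    bad = []
    for line in lines:
        m = RECAP_RE.search(line)
        if m and (int(m.group("failed")) or int(m.group("unreachable"))):
            bad.append(m.group("host"))
    return bad

recap = [
    "testbed-node-0 : ok=27 changed=11 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0",
    "testbed-node-1 : ok=20 changed=5 unreachable=0 failed=0 skipped=16 rescued=0 ignored=0",
    "testbed-node-2 : ok=20 changed=5 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0",
]
assert recap_failures(recap) == []  # all hosts clean, proceed to cinder
```

In this job the glance recap shows `failed=0 unreachable=0` on all three nodes, which is why the script proceeds to `osism apply -a upgrade cinder` immediately afterwards.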
2026-04-09 07:05:21.960484 | orchestrator | 2026-04-09 07:05:21.960667 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 07:05:21.960687 | orchestrator | 2026-04-09 07:05:21.960699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 07:05:21.960711 | orchestrator | Thursday 09 April 2026 07:05:04 +0000 (0:00:01.549) 0:00:01.549 ******** 2026-04-09 07:05:21.960723 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:05:21.960735 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:05:21.960746 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:05:21.960757 | orchestrator | 2026-04-09 07:05:21.960769 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 07:05:21.960780 | orchestrator | Thursday 09 April 2026 07:05:06 +0000 (0:00:01.827) 0:00:03.377 ******** 2026-04-09 07:05:21.960792 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-09 07:05:21.960804 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-09 07:05:21.960816 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-09 07:05:21.960827 | orchestrator | 2026-04-09 07:05:21.960838 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-09 07:05:21.960849 | orchestrator | 2026-04-09 07:05:21.960861 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 07:05:21.960872 | orchestrator | Thursday 09 April 2026 07:05:07 +0000 (0:00:01.706) 0:00:05.084 ******** 2026-04-09 07:05:21.960884 | orchestrator | included: /ansible/roles/cinder/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:05:21.960896 | orchestrator | 2026-04-09 07:05:21.960907 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 
07:05:21.960918 | orchestrator | Thursday 09 April 2026 07:05:11 +0000 (0:00:03.177) 0:00:08.261 ******** 2026-04-09 07:05:21.960930 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:05:21.960941 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:05:21.960953 | orchestrator | included: /ansible/roles/cinder/tasks/config.yml for testbed-node-0 2026-04-09 07:05:21.960964 | orchestrator | 2026-04-09 07:05:21.960976 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-09 07:05:21.960987 | orchestrator | Thursday 09 April 2026 07:05:12 +0000 (0:00:01.782) 0:00:10.043 ******** 2026-04-09 07:05:21.961026 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:05:21.961069 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:05:21.961083 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:05:21.961118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:05:21.961132 | 
orchestrator | 2026-04-09 07:05:21.961144 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 07:05:21.961155 | orchestrator | Thursday 09 April 2026 07:05:16 +0000 (0:00:03.441) 0:00:13.485 ******** 2026-04-09 07:05:21.961168 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:05:21.961179 | orchestrator | 2026-04-09 07:05:21.961191 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 07:05:21.961203 | orchestrator | Thursday 09 April 2026 07:05:17 +0000 (0:00:01.168) 0:00:14.654 ******** 2026-04-09 07:05:21.961215 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0 2026-04-09 07:05:21.961226 | orchestrator | 2026-04-09 07:05:21.961238 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-09 07:05:21.961250 | orchestrator | Thursday 09 April 2026 07:05:18 +0000 (0:00:01.459) 0:00:16.113 ******** 2026-04-09 07:05:21.961262 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-09 07:05:21.961274 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-09 07:05:21.961286 | orchestrator | 2026-04-09 07:05:21.961297 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-09 07:05:21.961309 | orchestrator | Thursday 09 April 2026 07:05:21 +0000 (0:00:02.654) 0:00:18.768 ******** 2026-04-09 07:05:21.961328 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-09 07:05:21.961352 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-09 07:05:21.961375 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-09 07:05:42.508742 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-09 07:05:42.508869 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-09 07:05:42.508905 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-09 07:05:42.508918 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-09 07:05:42.508945 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-09 07:05:42.508957 | orchestrator | 2026-04-09 07:05:42.508969 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-09 07:05:42.508981 | orchestrator | Thursday 09 April 2026 07:05:27 +0000 (0:00:06.333) 0:00:25.101 ******** 2026-04-09 07:05:42.508992 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-09 07:05:42.509011 | orchestrator | 2026-04-09 07:05:42.509028 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-09 07:05:42.509046 | orchestrator | Thursday 09 April 2026 07:05:30 +0000 (0:00:02.398) 0:00:27.500 ******** 2026-04-09 07:05:42.509062 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 
2026-04-09 07:05:42.509080 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-09 07:05:42.509098 | orchestrator | 2026-04-09 07:05:42.509114 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-09 07:05:42.509187 | orchestrator | Thursday 09 April 2026 07:05:33 +0000 (0:00:03.560) 0:00:31.061 ******** 2026-04-09 07:05:42.509207 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-09 07:05:42.509233 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-09 07:05:42.509244 | orchestrator | 2026-04-09 07:05:42.509254 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-09 07:05:42.509264 | orchestrator | Thursday 09 April 2026 07:05:35 +0000 (0:00:01.902) 0:00:32.963 ******** 2026-04-09 07:05:42.509273 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:05:42.509284 | orchestrator | 2026-04-09 07:05:42.509294 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-09 07:05:42.509303 | orchestrator | Thursday 09 April 2026 07:05:36 +0000 (0:00:01.138) 0:00:34.102 ******** 2026-04-09 07:05:42.509313 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:05:42.509323 | orchestrator | 2026-04-09 07:05:42.509332 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 07:05:42.509342 | orchestrator | Thursday 09 April 2026 07:05:37 +0000 (0:00:01.110) 0:00:35.212 ******** 2026-04-09 07:05:42.509351 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0 2026-04-09 07:05:42.509362 | orchestrator | 2026-04-09 07:05:42.509371 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-09 07:05:42.509381 | orchestrator | Thursday 
09 April 2026 07:05:39 +0000 (0:00:01.531) 0:00:36.744 ******** 2026-04-09 07:05:42.509393 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:05:42.509407 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:05:42.509429 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:05:49.369116 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:05:49.369268 | orchestrator | 2026-04-09 07:05:49.369282 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-09 07:05:49.369307 | orchestrator | Thursday 09 April 2026 07:05:44 +0000 (0:00:04.911) 0:00:41.656 ******** 2026-04-09 07:05:49.369319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:05:49.369331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:05:49.369342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:05:49.369350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:05:49.369378 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:05:49.369387 | orchestrator | 2026-04-09 07:05:49.369410 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-09 07:05:49.369418 | orchestrator | Thursday 09 April 2026 07:05:46 +0000 (0:00:01.745) 0:00:43.402 ******** 2026-04-09 07:05:49.369430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:05:49.369439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:05:49.369447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:05:49.369455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:05:49.369463 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:05:49.369471 | orchestrator | 2026-04-09 07:05:49.369478 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-09 07:05:49.369486 | orchestrator | Thursday 09 April 2026 07:05:47 +0000 (0:00:01.697) 0:00:45.099 ******** 2026-04-09 07:05:49.369500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 
07:06:16.974909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:16.975046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:16.975073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:16.975085 | orchestrator | 2026-04-09 07:06:16.975097 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-09 07:06:16.975109 | orchestrator | Thursday 09 April 2026 07:05:53 +0000 (0:00:05.441) 0:00:50.541 ******** 2026-04-09 07:06:16.975120 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-09 07:06:16.975131 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:06:16.975141 | orchestrator | 2026-04-09 07:06:16.975152 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-09 07:06:16.975162 | orchestrator | Thursday 09 April 2026 07:05:54 +0000 (0:00:01.476) 0:00:52.017 ******** 2026-04-09 07:06:16.975173 | orchestrator | included: service-uwsgi-config for testbed-node-0 2026-04-09 07:06:16.975183 | orchestrator | 2026-04-09 07:06:16.975193 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-09 07:06:16.975227 | orchestrator | Thursday 09 April 2026 07:05:56 +0000 (0:00:01.794) 0:00:53.812 ******** 2026-04-09 07:06:16.975238 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:06:16.975248 | orchestrator | 2026-04-09 07:06:16.975258 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-09 07:06:16.975267 | orchestrator | Thursday 09 April 2026 07:05:59 +0000 (0:00:02.758) 0:00:56.570 ******** 2026-04-09 07:06:16.975280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:06:16.975416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:16.975437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:16.975447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:16.975460 | orchestrator | 2026-04-09 07:06:16.975472 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-09 07:06:16.975484 | orchestrator | Thursday 09 April 2026 07:06:11 +0000 (0:00:12.210) 0:01:08.780 ******** 2026-04-09 07:06:16.975495 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:06:16.975507 | orchestrator | 2026-04-09 07:06:16.975519 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-09 07:06:16.975548 | orchestrator | Thursday 09 April 2026 07:06:13 +0000 (0:00:02.301) 0:01:11.081 ******** 2026-04-09 07:06:16.975561 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:06:16.975574 | orchestrator | 2026-04-09 07:06:16.975586 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-09 07:06:16.975633 | orchestrator | Thursday 09 April 2026 07:06:16 +0000 (0:00:02.496) 0:01:13.578 ******** 
2026-04-09 07:06:16.975647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:06:16.975670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:06:59.909184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:06:59.909299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:06:59.909318 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:06:59.909332 | orchestrator | 2026-04-09 07:06:59.909345 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-09 07:06:59.909357 | orchestrator | Thursday 09 April 2026 07:06:18 +0000 (0:00:01.695) 0:01:15.274 ******** 2026-04-09 07:06:59.909368 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:06:59.909403 | orchestrator | 2026-04-09 07:06:59.909415 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-09 07:06:59.909426 | orchestrator | Thursday 09 April 2026 07:06:19 
+0000 (0:00:01.544) 0:01:16.818 ******** 2026-04-09 07:06:59.909437 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:06:59.909447 | orchestrator | 2026-04-09 07:06:59.909458 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-09 07:06:59.909469 | orchestrator | Thursday 09 April 2026 07:06:58 +0000 (0:00:38.399) 0:01:55.218 ******** 2026-04-09 07:06:59.909483 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:06:59.909498 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:59.909536 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:06:59.909551 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:06:59.909572 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:59.909585 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:59.909597 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:06:59.909621 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:07.450151 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:07.450294 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:07.450353 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:07.450374 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:07.450387 | orchestrator | 2026-04-09 07:07:07.450400 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 07:07:07.450412 | orchestrator | Thursday 09 April 2026 07:07:01 +0000 (0:00:03.435) 0:01:58.654 ******** 2026-04-09 07:07:07.450423 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:07:07.450435 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:07:07.450446 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:07:07.450457 | orchestrator | 2026-04-09 07:07:07.450477 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 07:07:07.450495 | orchestrator | Thursday 09 April 2026 07:07:02 +0000 (0:00:01.333) 0:01:59.988 ******** 2026-04-09 07:07:07.450514 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:07:07.450533 | orchestrator | 2026-04-09 07:07:07.450553 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-09 07:07:07.450573 | orchestrator | Thursday 09 April 2026 07:07:04 +0000 (0:00:01.484) 0:02:01.473 ******** 2026-04-09 07:07:07.450594 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-09 07:07:07.450610 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-09 07:07:07.450623 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-09 07:07:07.450636 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-09 07:07:07.450649 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-09 07:07:07.450696 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 
2026-04-09 07:07:07.450709 | orchestrator |
2026-04-09 07:07:07.450722 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-04-09 07:07:07.450774 | orchestrator | Thursday 09 April 2026 07:07:06 +0000 (0:00:02.666) 0:02:04.140 ********
2026-04-09 07:07:07.450794 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:07.450823 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:07.450836 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:07.450849 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:07.450875 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:08.754442 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:08.754547 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:08.754564 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:08.754594 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:08.754650 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:08.754715 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:08.754728 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:08.754746 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:08.754775 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:12.138733 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:12.138815 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:12.138824 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:12.138843 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:12.138878 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:12.138886 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:12.138892 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-09 07:07:12.138898 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:12.138912 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:12.138922 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-09 07:07:28.805995 | orchestrator |
2026-04-09 07:07:28.806157 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-04-09 07:07:28.806174 | orchestrator | Thursday 09 April 2026 07:07:13 +0000 (0:00:06.285) 0:02:10.425 ********
2026-04-09 07:07:28.806185 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-09 07:07:28.806198 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-09 07:07:28.806208 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-09 07:07:28.806218 | orchestrator |
2026-04-09 07:07:28.806229 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-04-09 07:07:28.806239 | orchestrator | Thursday 09 April 2026 07:07:16 +0000 (0:00:02.789) 0:02:13.214 ********
2026-04-09 07:07:28.806250 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-09 07:07:28.806260 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-09 07:07:28.806270 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-09 07:07:28.806281 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-09 07:07:28.806293 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-09 07:07:28.806303 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-09 07:07:28.806313 | orchestrator |
2026-04-09 07:07:28.806323 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-04-09 07:07:28.806354 | orchestrator | Thursday 09 April 2026 07:07:19 +0000 (0:00:03.730) 0:02:16.945 ********
2026-04-09 07:07:28.806365 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-04-09 07:07:28.806376 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-04-09 07:07:28.806386 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-04-09 07:07:28.806396 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-04-09 07:07:28.806406 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-04-09 07:07:28.806415 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-04-09 07:07:28.806425 | orchestrator |
2026-04-09 07:07:28.806435 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-04-09 07:07:28.806445 | orchestrator | Thursday 09 April 2026 07:07:21 +0000 (0:00:02.106) 0:02:19.051 ********
2026-04-09 07:07:28.806455 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:07:28.806465 | orchestrator |
2026-04-09 07:07:28.806475 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-04-09 07:07:28.806485 | orchestrator | Thursday 09 April 2026 07:07:22 +0000 (0:00:01.132) 0:02:20.184 ********
2026-04-09 07:07:28.806494 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:07:28.806504 | orchestrator | skipping: [testbed-node-1]
2026-04-09 07:07:28.806514 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:07:28.806524 | orchestrator |
2026-04-09 07:07:28.806536 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-09 07:07:28.806561 | orchestrator | Thursday 09 April 2026 07:07:24 +0000 (0:00:01.593) 0:02:21.778 ********
2026-04-09 07:07:28.806574 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 07:07:28.806587 | orchestrator |
2026-04-09 07:07:28.806599 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-04-09 07:07:28.806611 | orchestrator | Thursday 09 April 2026 07:07:25 +0000 (0:00:01.336) 0:02:23.114 ********
2026-04-09 07:07:28.806643 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:07:28.806662 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:07:28.806704 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:07:28.806724 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 07:07:28.806736 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 07:07:28.806747 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 07:07:28.806767 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 07:07:31.728792 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 07:07:31.728925 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 07:07:31.728959 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 07:07:31.728973 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 07:07:31.728985 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 07:07:31.728997 | orchestrator |
2026-04-09 07:07:31.729011 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-04-09 07:07:31.729024 | orchestrator | Thursday 09 April 2026 07:07:30 +0000 (0:00:05.090) 0:02:28.205 ********
2026-04-09 07:07:31.729059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:07:31.729084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 07:07:31.729102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-09 07:07:31.729114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 07:07:31.729126 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:07:31.729140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:07:31.729169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:07:33.502013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:07:33.502175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:07:33.502192 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:07:33.502226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:07:33.502242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:07:33.502254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:07:33.502305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:07:33.502319 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:07:33.502330 | orchestrator | 2026-04-09 07:07:33.502343 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-09 07:07:33.502355 | 
orchestrator | Thursday 09 April 2026 07:07:32 +0000 (0:00:01.948) 0:02:30.154 ******** 2026-04-09 07:07:33.502372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:07:33.502386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:07:33.502398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:07:33.502417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:07:33.502428 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:07:33.502448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:07:36.553273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:07:36.553402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:07:36.553422 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:07:36.553464 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:07:36.553481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:07:36.553497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:07:36.553528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:07:36.553546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:07:36.553558 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:07:36.553570 | orchestrator | 2026-04-09 07:07:36.553582 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-09 07:07:36.553594 | orchestrator | Thursday 09 April 2026 07:07:34 +0000 (0:00:01.820) 0:02:31.974 ******** 2026-04-09 07:07:36.553607 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:07:36.553628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:07:36.553649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:07:51.181932 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:51.182126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:51.182172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:51.182187 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:51.182200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:51.182232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:51.182253 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:51.182265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:51.182290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:51.182303 | orchestrator | 2026-04-09 07:07:51.182316 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-09 07:07:51.182330 | orchestrator | Thursday 09 April 2026 07:07:41 +0000 (0:00:06.556) 0:02:38.530 ******** 2026-04-09 07:07:51.182341 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-09 07:07:51.182352 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:07:51.182364 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-09 07:07:51.182375 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:07:51.182386 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-09 07:07:51.182397 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:07:51.182408 | orchestrator | 2026-04-09 07:07:51.182419 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-09 07:07:51.182432 | orchestrator | Thursday 09 April 2026 07:07:43 +0000 (0:00:01.748) 0:02:40.279 ******** 2026-04-09 07:07:51.182445 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:07:51.182458 | orchestrator | 2026-04-09 07:07:51.182471 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-09 07:07:51.182483 | orchestrator | Thursday 09 April 2026 07:07:44 +0000 (0:00:01.870) 0:02:42.149 ******** 2026-04-09 07:07:51.182496 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:07:51.182509 | orchestrator | 
changed: [testbed-node-1] 2026-04-09 07:07:51.182522 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:07:51.182535 | orchestrator | 2026-04-09 07:07:51.182548 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-09 07:07:51.182561 | orchestrator | Thursday 09 April 2026 07:07:47 +0000 (0:00:02.992) 0:02:45.142 ******** 2026-04-09 07:07:51.182586 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:07:59.781835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:07:59.781916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:07:59.781926 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:59.781933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:59.781938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:59.781970 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:59.781977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:59.781983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2026-04-09 07:07:59.781988 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:59.781993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:07:59.782005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:07.079498 | orchestrator | 2026-04-09 07:08:07.079573 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-09 07:08:07.079581 | orchestrator | Thursday 09 April 2026 07:08:00 +0000 (0:00:13.003) 0:02:58.145 ******** 2026-04-09 07:08:07.079586 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:08:07.079592 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:08:07.079597 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:08:07.079601 | orchestrator | 2026-04-09 07:08:07.079607 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-09 07:08:07.079612 | orchestrator | Thursday 09 April 2026 07:08:03 +0000 (0:00:02.713) 0:03:00.859 ******** 2026-04-09 07:08:07.079617 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:08:07.079622 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:08:07.079627 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:08:07.079632 | orchestrator | 2026-04-09 07:08:07.079637 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-09 07:08:07.079641 | orchestrator | Thursday 09 April 2026 07:08:06 +0000 (0:00:02.818) 0:03:03.678 ******** 2026-04-09 07:08:07.079648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:08:07.079656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:08:07.079662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-09 07:08:07.079681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:08:07.079686 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:08:07.079713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:08:07.079751 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:08:07.079757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:08:07.079761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:08:07.079770 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:08:07.079779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:08:07.079788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:08:13.096048 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:08:13.096174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:08:13.096202 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:08:13.096242 | orchestrator | 2026-04-09 07:08:13.096265 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-09 07:08:13.096280 | orchestrator | Thursday 09 April 2026 07:08:08 +0000 (0:00:01.750) 0:03:05.429 ******** 2026-04-09 07:08:13.096291 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:08:13.096302 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:08:13.096314 | 
orchestrator | skipping: [testbed-node-2] 2026-04-09 07:08:13.096325 | orchestrator | 2026-04-09 07:08:13.096337 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-04-09 07:08:13.096348 | orchestrator | Thursday 09 April 2026 07:08:09 +0000 (0:00:01.672) 0:03:07.101 ******** 2026-04-09 07:08:13.096392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:08:13.096438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:08:13.096490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:08:13.096513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:13.096530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:13.096565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:13.096586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:13.096611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:17.074890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:17.074993 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:17.075032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:17.075045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:08:17.075057 | orchestrator | 2026-04-09 07:08:17.075070 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-04-09 07:08:17.075082 | orchestrator | Thursday 09 April 2026 07:08:15 +0000 (0:00:05.312) 0:03:12.414 ******** 2026-04-09 07:08:17.075095 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 07:08:17.075106 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:08:17.075118 | orchestrator | } 2026-04-09 07:08:17.075144 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 07:08:17.075155 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:08:17.075166 | orchestrator | } 2026-04-09 07:08:17.075177 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 07:08:17.075188 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:08:17.075199 | orchestrator | } 2026-04-09 07:08:17.075244 | orchestrator | 2026-04-09 07:08:17.075257 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 07:08:17.075268 | orchestrator | Thursday 09 April 2026 07:08:16 +0000 (0:00:01.398) 0:03:13.812 ******** 2026-04-09 07:08:17.075301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 
'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:08:17.075317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:08:17.075338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:08:17.075351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:08:17.075362 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:08:17.075381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:08:17.075406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:10:31.239469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:10:31.239615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 
07:10:31.239634 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:10:31.239652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:10:31.239683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:10:31.239697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 07:10:31.239741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 07:10:31.239773 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:10:31.239792 | orchestrator | 2026-04-09 07:10:31.239812 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 07:10:31.239831 | orchestrator | Thursday 09 April 2026 07:08:18 +0000 (0:00:01.683) 0:03:15.495 ******** 2026-04-09 07:10:31.239916 | orchestrator | 2026-04-09 07:10:31.239935 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 07:10:31.239953 | orchestrator | Thursday 09 April 2026 07:08:18 +0000 (0:00:00.479) 0:03:15.975 ******** 2026-04-09 
07:10:31.239971 | orchestrator | 2026-04-09 07:10:31.239990 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 07:10:31.240009 | orchestrator | Thursday 09 April 2026 07:08:19 +0000 (0:00:00.627) 0:03:16.602 ******** 2026-04-09 07:10:31.240030 | orchestrator | 2026-04-09 07:10:31.240053 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-09 07:10:31.240074 | orchestrator | Thursday 09 April 2026 07:08:20 +0000 (0:00:00.815) 0:03:17.417 ******** 2026-04-09 07:10:31.240094 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:10:31.240116 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:10:31.240136 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:10:31.240157 | orchestrator | 2026-04-09 07:10:31.240178 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-09 07:10:31.240200 | orchestrator | Thursday 09 April 2026 07:08:52 +0000 (0:00:32.687) 0:03:50.105 ******** 2026-04-09 07:10:31.240220 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:10:31.240241 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:10:31.240263 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:10:31.240282 | orchestrator | 2026-04-09 07:10:31.240304 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-09 07:10:31.240326 | orchestrator | Thursday 09 April 2026 07:09:06 +0000 (0:00:13.143) 0:04:03.249 ******** 2026-04-09 07:10:31.240346 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:10:31.240366 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:10:31.240384 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:10:31.240402 | orchestrator | 2026-04-09 07:10:31.240422 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-09 07:10:31.240443 | orchestrator | Thursday 09 April 2026 
07:09:39 +0000 (0:00:33.905) 0:04:37.154 ******** 2026-04-09 07:10:31.240463 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:10:31.240483 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:10:31.240502 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:10:31.240522 | orchestrator | 2026-04-09 07:10:31.240542 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-09 07:10:31.240562 | orchestrator | Thursday 09 April 2026 07:09:53 +0000 (0:00:13.568) 0:04:50.723 ******** 2026-04-09 07:10:31.240582 | orchestrator | Pausing for 30 seconds 2026-04-09 07:10:31.240603 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:10:31.240622 | orchestrator | 2026-04-09 07:10:31.240641 | orchestrator | TASK [cinder : Reload cinder services to remove RPC version pin] *************** 2026-04-09 07:10:31.240660 | orchestrator | Thursday 09 April 2026 07:10:25 +0000 (0:00:31.554) 0:05:22.278 ******** 2026-04-09 07:10:31.240696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:10:31.240757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:09.948714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:09.948839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:09.948857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:09.948929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:09.948966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:09.949000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:09.949014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:09.949025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:09.949043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:09.949063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:09.949075 | orchestrator | 2026-04-09 07:11:09.949089 | orchestrator | TASK [cinder : Running Cinder online schema migration] ************************* 2026-04-09 07:11:09.949102 | orchestrator | Thursday 09 April 2026 07:10:54 +0000 (0:00:29.367) 0:05:51.645 ******** 2026-04-09 07:11:09.949114 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:11:09.949126 | orchestrator | 2026-04-09 07:11:09.949138 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:11:09.949150 | orchestrator | testbed-node-0 : ok=44  changed=13  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 07:11:09.949163 | orchestrator | testbed-node-1 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 07:11:09.949174 | orchestrator | testbed-node-2 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 07:11:09.949185 | orchestrator | 2026-04-09 07:11:09.949196 | orchestrator | 2026-04-09 07:11:09.949208 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:11:09.949228 | orchestrator | 
Thursday 09 April 2026 07:11:09 +0000 (0:00:15.498) 0:06:07.144 ******** 2026-04-09 07:11:10.385600 | orchestrator | =============================================================================== 2026-04-09 07:11:10.385672 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 38.40s 2026-04-09 07:11:10.385677 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 33.91s 2026-04-09 07:11:10.385681 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 32.69s 2026-04-09 07:11:10.385686 | orchestrator | cinder : Wait for cinder services to update service versions ----------- 31.55s 2026-04-09 07:11:10.385690 | orchestrator | cinder : Reload cinder services to remove RPC version pin -------------- 29.37s 2026-04-09 07:11:10.385694 | orchestrator | cinder : Running Cinder online schema migration ------------------------ 15.50s 2026-04-09 07:11:10.385698 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 13.57s 2026-04-09 07:11:10.385702 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.14s 2026-04-09 07:11:10.385706 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.00s 2026-04-09 07:11:10.385710 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.21s 2026-04-09 07:11:10.385713 | orchestrator | cinder : Copying over config.json files for services -------------------- 6.56s 2026-04-09 07:11:10.385717 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.33s 2026-04-09 07:11:10.385721 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.29s 2026-04-09 07:11:10.385725 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.44s 2026-04-09 07:11:10.385729 | orchestrator | 
service-check-containers : cinder | Check containers -------------------- 5.31s 2026-04-09 07:11:10.385732 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.09s 2026-04-09 07:11:10.385753 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.91s 2026-04-09 07:11:10.385757 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.73s 2026-04-09 07:11:10.385761 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.56s 2026-04-09 07:11:10.385765 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.44s 2026-04-09 07:11:10.598284 | orchestrator | + osism apply -a upgrade barbican 2026-04-09 07:11:11.947097 | orchestrator | 2026-04-09 07:11:11 | INFO  | Prepare task for execution of barbican. 2026-04-09 07:11:12.028492 | orchestrator | 2026-04-09 07:11:12 | INFO  | Task 6d03700c-e80a-4cac-897b-85bfa7c5bf67 (barbican) was prepared for execution. 2026-04-09 07:11:12.028616 | orchestrator | 2026-04-09 07:11:12 | INFO  | It takes a moment until task 6d03700c-e80a-4cac-897b-85bfa7c5bf67 (barbican) has been started and output is visible here. 
2026-04-09 07:11:26.017842 | orchestrator | 2026-04-09 07:11:26.017980 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 07:11:26.017992 | orchestrator | 2026-04-09 07:11:26.018000 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 07:11:26.018007 | orchestrator | Thursday 09 April 2026 07:11:17 +0000 (0:00:01.768) 0:00:01.768 ******** 2026-04-09 07:11:26.018068 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:11:26.018078 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:11:26.018086 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:11:26.018093 | orchestrator | 2026-04-09 07:11:26.018100 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 07:11:26.018108 | orchestrator | Thursday 09 April 2026 07:11:18 +0000 (0:00:01.712) 0:00:03.481 ******** 2026-04-09 07:11:26.018115 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-09 07:11:26.018122 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-09 07:11:26.018129 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-09 07:11:26.018136 | orchestrator | 2026-04-09 07:11:26.018143 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-09 07:11:26.018150 | orchestrator | 2026-04-09 07:11:26.018157 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 07:11:26.018164 | orchestrator | Thursday 09 April 2026 07:11:20 +0000 (0:00:01.536) 0:00:05.026 ******** 2026-04-09 07:11:26.018171 | orchestrator | included: /ansible/roles/barbican/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:11:26.018180 | orchestrator | 2026-04-09 07:11:26.018187 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 
2026-04-09 07:11:26.018194 | orchestrator | Thursday 09 April 2026 07:11:23 +0000 (0:00:03.512) 0:00:08.539 ******** 2026-04-09 07:11:26.018205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:26.018215 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:26.018258 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:26.018267 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:26.018276 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:26.018284 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:26.018296 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:26.018305 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:26.018316 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:36.589736 | orchestrator | 2026-04-09 07:11:36.589874 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-09 07:11:36.589996 | orchestrator | Thursday 09 April 2026 07:11:27 +0000 (0:00:03.302) 0:00:11.842 ******** 2026-04-09 07:11:36.590123 | orchestrator | ok: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-09 07:11:36.590152 | orchestrator | ok: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-09 
07:11:36.590182 | orchestrator | ok: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-09 07:11:36.590195 | orchestrator | 2026-04-09 07:11:36.590207 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-09 07:11:36.590218 | orchestrator | Thursday 09 April 2026 07:11:29 +0000 (0:00:01.978) 0:00:13.820 ******** 2026-04-09 07:11:36.590230 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:11:36.590242 | orchestrator | 2026-04-09 07:11:36.590253 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-09 07:11:36.590266 | orchestrator | Thursday 09 April 2026 07:11:30 +0000 (0:00:01.154) 0:00:14.974 ******** 2026-04-09 07:11:36.590278 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:11:36.590292 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:11:36.590304 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:11:36.590317 | orchestrator | 2026-04-09 07:11:36.590330 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 07:11:36.590343 | orchestrator | Thursday 09 April 2026 07:11:31 +0000 (0:00:01.520) 0:00:16.495 ******** 2026-04-09 07:11:36.590357 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:11:36.590370 | orchestrator | 2026-04-09 07:11:36.590383 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-09 07:11:36.590396 | orchestrator | Thursday 09 April 2026 07:11:33 +0000 (0:00:01.756) 0:00:18.251 ******** 2026-04-09 07:11:36.590415 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:36.590458 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:36.590515 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:36.590537 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:36.590556 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:36.590586 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:36.590605 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:36.590623 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:36.590652 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:39.856798 | orchestrator | 2026-04-09 07:11:39.856961 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-09 07:11:39.856981 | orchestrator | Thursday 09 April 2026 07:11:37 +0000 (0:00:04.104) 0:00:22.356 ******** 2026-04-09 07:11:39.856999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:11:39.857041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:11:39.857054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:11:39.857067 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 07:11:39.857080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:11:39.857117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:11:39.857131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:11:39.857150 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:11:39.857162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:11:39.857174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:11:39.857186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:11:39.857198 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:11:39.857209 | orchestrator | 2026-04-09 07:11:39.857221 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-09 07:11:39.857232 | orchestrator | Thursday 09 April 2026 07:11:39 +0000 (0:00:01.869) 0:00:24.226 ******** 2026-04-09 07:11:39.857256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:11:42.797818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:11:42.797982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:11:42.798002 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:11:42.798064 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:11:42.798080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:11:42.798106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:11:42.798117 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:11:42.798147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:11:42.798182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:11:42.798194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:11:42.798204 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:11:42.798215 | orchestrator | 2026-04-09 07:11:42.798226 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-09 07:11:42.798238 | orchestrator | Thursday 09 April 2026 07:11:41 +0000 (0:00:01.690) 0:00:25.916 ******** 2026-04-09 07:11:42.798249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:42.798274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:55.118865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:55.119108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:55.119136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:55.119149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:55.119181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:11:55.119241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 07:11:55.119254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 07:11:55.119267 | orchestrator |
2026-04-09 07:11:55.119279 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-04-09 07:11:55.119292 | orchestrator | Thursday 09 April 2026 07:11:45 +0000 (0:00:04.568) 0:00:30.484 ********
2026-04-09 07:11:55.119303 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:11:55.119316 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:11:55.119327 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:11:55.119338 | orchestrator |
2026-04-09 07:11:55.119349 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-04-09 07:11:55.119361 | orchestrator | Thursday 09 April 2026 07:11:48 +0000 (0:00:02.623) 0:00:33.108 ********
2026-04-09 07:11:55.119375 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 07:11:55.119389 | orchestrator |
2026-04-09 07:11:55.119401 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-04-09 07:11:55.119415 | orchestrator | Thursday 09 April 2026 07:11:50 +0000 (0:00:02.386) 0:00:35.495 ********
2026-04-09 07:11:55.119429 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:11:55.119442 | orchestrator | skipping: [testbed-node-1]
2026-04-09 07:11:55.119455 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:11:55.119469 | orchestrator |
2026-04-09 07:11:55.119482 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-04-09 07:11:55.119494 | orchestrator | Thursday 09 April 2026 07:11:52 +0000 (0:00:01.640) 0:00:37.136 ********
2026-04-09 07:11:55.119510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:11:55.119539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:11:55.119565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:12:01.575825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:12:01.576026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:12:01.576060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:12:01.576130 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:12:01.576153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:12:01.576172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:12:01.576192 | orchestrator | 2026-04-09 07:12:01.576206 | orchestrator | TASK 
[barbican : Copying over existing policy file] **************************** 2026-04-09 07:12:01.576237 | orchestrator | Thursday 09 April 2026 07:12:00 +0000 (0:00:08.428) 0:00:45.564 ******** 2026-04-09 07:12:01.576253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:12:01.576268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-04-09 07:12:01.576301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:12:01.576320 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:12:01.576348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:12:01.576382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:12:05.473401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:12:05.473564 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:12:05.473597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:12:05.473722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:12:05.473767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:12:05.473781 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:12:05.473794 | orchestrator | 2026-04-09 07:12:05.473807 | orchestrator | TASK [service-check-containers : barbican | Check 
containers] ****************** 2026-04-09 07:12:05.473820 | orchestrator | Thursday 09 April 2026 07:12:03 +0000 (0:00:02.310) 0:00:47.875 ******** 2026-04-09 07:12:05.473853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:12:05.473868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:12:05.473890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:12:05.473908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:12:05.473958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:12:05.473984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:12:09.840457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 07:12:09.840579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 07:12:09.840595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 07:12:09.840606 | orchestrator |
2026-04-09 07:12:09.840618 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] ***
2026-04-09 07:12:09.840630 | orchestrator | Thursday 09 April 2026 07:12:07 +0000 (0:00:04.322) 0:00:52.198 ********
2026-04-09 07:12:09.840641 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 07:12:09.840652 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 07:12:09.840662 | orchestrator | }
2026-04-09 07:12:09.840672 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 07:12:09.840696 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 07:12:09.840707 | orchestrator | }
2026-04-09 07:12:09.840717 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 07:12:09.840727 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 07:12:09.840737 | orchestrator | }
2026-04-09 07:12:09.840747 | orchestrator |
2026-04-09 07:12:09.840757 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 07:12:09.840767 | orchestrator | Thursday 09 April 2026 07:12:08 +0000 (0:00:01.423) 0:00:53.622 ********
2026-04-09 07:12:09.840781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:12:09.840808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name':
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:12:09.840828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:12:09.840839 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:12:09.840850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:12:09.840866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:12:09.840878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:12:09.840888 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:12:09.840906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:15:12.997235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:15:12.997363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:15:12.997382 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:15:12.997397 | orchestrator | 2026-04-09 07:15:12.997410 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-09 07:15:12.997423 | orchestrator | Thursday 09 April 2026 07:12:11 +0000 (0:00:02.519) 0:00:56.142 ******** 2026-04-09 07:15:12.997434 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:15:12.997445 | orchestrator | 2026-04-09 07:15:12.997456 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-09 07:15:12.997468 | orchestrator | Thursday 09 April 2026 07:12:25 +0000 (0:00:14.044) 0:01:10.186 ******** 2026-04-09 07:15:12.997479 | orchestrator | 2026-04-09 07:15:12.997490 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-09 07:15:12.997501 | orchestrator | Thursday 09 April 2026 07:12:25 +0000 (0:00:00.434) 0:01:10.621 ******** 2026-04-09 07:15:12.997512 | orchestrator | 2026-04-09 07:15:12.997523 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-09 07:15:12.997551 | orchestrator | Thursday 09 April 2026 07:12:26 +0000 (0:00:00.438) 0:01:11.059 ******** 2026-04-09 07:15:12.997563 | orchestrator | 2026-04-09 07:15:12.997574 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-09 07:15:12.997586 | orchestrator | Thursday 09 April 2026 07:12:27 +0000 (0:00:00.864) 0:01:11.924 ******** 2026-04-09 07:15:12.997598 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 07:15:12.997609 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:15:12.997620 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:15:12.997631 | orchestrator | 2026-04-09 07:15:12.997642 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-09 07:15:12.997653 | orchestrator | Thursday 09 April 2026 07:14:41 +0000 (0:02:13.882) 0:03:25.807 ******** 2026-04-09 07:15:12.997664 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:15:12.997675 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:15:12.997686 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:15:12.997697 | orchestrator | 2026-04-09 07:15:12.997709 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-09 07:15:12.997720 | orchestrator | Thursday 09 April 2026 07:14:53 +0000 (0:00:12.860) 0:03:38.668 ******** 2026-04-09 07:15:12.997755 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:15:12.997769 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:15:12.997782 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:15:12.997795 | orchestrator | 2026-04-09 07:15:12.997807 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:15:12.997822 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 07:15:12.997836 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 07:15:12.997850 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 07:15:12.997863 | orchestrator | 2026-04-09 07:15:12.997876 | orchestrator | 2026-04-09 07:15:12.997887 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:15:12.997899 | orchestrator | Thursday 09 April 2026 
07:15:12 +0000 (0:00:18.645) 0:03:57.313 ******** 2026-04-09 07:15:12.997909 | orchestrator | =============================================================================== 2026-04-09 07:15:12.997920 | orchestrator | barbican : Restart barbican-api container ----------------------------- 133.88s 2026-04-09 07:15:12.997931 | orchestrator | barbican : Restart barbican-worker container --------------------------- 18.65s 2026-04-09 07:15:12.997942 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 14.04s 2026-04-09 07:15:12.997953 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.86s 2026-04-09 07:15:12.997964 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.43s 2026-04-09 07:15:12.997993 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.57s 2026-04-09 07:15:12.998005 | orchestrator | service-check-containers : barbican | Check containers ------------------ 4.32s 2026-04-09 07:15:12.998097 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.10s 2026-04-09 07:15:12.998113 | orchestrator | barbican : include_tasks ------------------------------------------------ 3.51s 2026-04-09 07:15:12.998126 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.30s 2026-04-09 07:15:12.998145 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.62s 2026-04-09 07:15:12.998161 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.52s 2026-04-09 07:15:12.998178 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.39s 2026-04-09 07:15:12.998195 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.31s 2026-04-09 07:15:12.998213 | orchestrator | barbican : Ensuring vassals config 
directories exist -------------------- 1.98s 2026-04-09 07:15:12.998231 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.87s 2026-04-09 07:15:12.998252 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.76s 2026-04-09 07:15:12.998270 | orchestrator | barbican : Flush handlers ----------------------------------------------- 1.74s 2026-04-09 07:15:12.998289 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.71s 2026-04-09 07:15:12.998303 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.69s 2026-04-09 07:15:13.192267 | orchestrator | + osism apply -a upgrade designate 2026-04-09 07:15:14.561707 | orchestrator | 2026-04-09 07:15:14 | INFO  | Prepare task for execution of designate. 2026-04-09 07:15:14.633847 | orchestrator | 2026-04-09 07:15:14 | INFO  | Task 1717fa41-22cb-454a-8de3-db10071f5f92 (designate) was prepared for execution. 2026-04-09 07:15:14.634528 | orchestrator | 2026-04-09 07:15:14 | INFO  | It takes a moment until task 1717fa41-22cb-454a-8de3-db10071f5f92 (designate) has been started and output is visible here. 
2026-04-09 07:15:24.562921 | orchestrator | 2026-04-09 07:15:24.563122 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 07:15:24.563145 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-09 07:15:24.563159 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-09 07:15:24.563183 | orchestrator | 2026-04-09 07:15:24.563211 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 07:15:24.563223 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-09 07:15:24.563234 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-09 07:15:24.563283 | orchestrator | Thursday 09 April 2026 07:15:19 +0000 (0:00:01.315) 0:00:01.315 ******** 2026-04-09 07:15:24.563295 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:15:24.563307 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:15:24.563318 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:15:24.563329 | orchestrator | 2026-04-09 07:15:24.563341 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 07:15:24.563352 | orchestrator | Thursday 09 April 2026 07:15:20 +0000 (0:00:00.798) 0:00:02.113 ******** 2026-04-09 07:15:24.563363 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-09 07:15:24.563374 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-09 07:15:24.563385 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-09 07:15:24.563403 | orchestrator | 2026-04-09 07:15:24.563422 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-09 07:15:24.563441 | orchestrator | 2026-04-09 07:15:24.563498 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 
07:15:24.563523 | orchestrator | Thursday 09 April 2026 07:15:20 +0000 (0:00:00.759) 0:00:02.873 ******** 2026-04-09 07:15:24.563543 | orchestrator | included: /ansible/roles/designate/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:15:24.563560 | orchestrator | 2026-04-09 07:15:24.563574 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-09 07:15:24.563586 | orchestrator | Thursday 09 April 2026 07:15:22 +0000 (0:00:01.341) 0:00:04.214 ******** 2026-04-09 07:15:24.563602 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:24.563623 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:24.563692 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:24.563709 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:24.563724 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:24.563738 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:24.563753 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:24.563777 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:24.563804 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.847918 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.848026 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.848080 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.848095 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.848135 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.848148 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.848194 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.848208 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.848220 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:28.848233 | orchestrator | 2026-04-09 07:15:28.848246 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-09 07:15:28.848259 | orchestrator | Thursday 
09 April 2026 07:15:25 +0000 (0:00:03.558) 0:00:07.773 ******** 2026-04-09 07:15:28.848271 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:15:28.848283 | orchestrator | 2026-04-09 07:15:28.848294 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-09 07:15:28.848306 | orchestrator | Thursday 09 April 2026 07:15:25 +0000 (0:00:00.175) 0:00:07.948 ******** 2026-04-09 07:15:28.848317 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:15:28.848328 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:15:28.848339 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:15:28.848358 | orchestrator | 2026-04-09 07:15:28.848370 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 07:15:28.848381 | orchestrator | Thursday 09 April 2026 07:15:26 +0000 (0:00:00.322) 0:00:08.271 ******** 2026-04-09 07:15:28.848392 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:15:28.848403 | orchestrator | 2026-04-09 07:15:28.848415 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-09 07:15:28.848426 | orchestrator | Thursday 09 April 2026 07:15:27 +0000 (0:00:01.186) 0:00:09.458 ******** 2026-04-09 07:15:28.848438 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:28.848467 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:32.110813 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:32.110931 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:32.110971 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:32.110984 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:32.110997 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:32.111145 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:32.111177 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:32.111195 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:32.111231 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:32.111251 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:32.111271 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:32.111300 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:32.111346 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:34.112414 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:34.112523 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:34.112534 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:34.112542 | orchestrator | 2026-04-09 07:15:34.112550 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-09 07:15:34.112559 | orchestrator | Thursday 09 April 2026 07:15:33 +0000 (0:00:05.707) 0:00:15.165 ******** 2026-04-09 07:15:34.112570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:15:34.112592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:15:34.112614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:15:34.112628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:15:34.112635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:15:34.112642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:15:34.112653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:15:34.112666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:15:35.424604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424618 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:15:35.424670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424802 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:15:35.424814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.424837 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:15:35.424848 | orchestrator | 2026-04-09 07:15:35.424861 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-09 07:15:35.424874 | orchestrator | Thursday 09 April 2026 07:15:34 +0000 (0:00:01.523) 0:00:16.689 ******** 2026-04-09 07:15:35.424893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:15:35.424917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:15:35.885490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:15:35.885591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.885608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.885622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:15:35.885652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:15:35.885706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.885720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 
07:15:35.885733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.885746 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:15:35.885760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:15:35.885777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.885796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:15:35.885816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:15:39.578328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:15:39.578444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:15:39.578461 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:15:39.578476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:15:39.578489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:15:39.578500 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:15:39.578535 | orchestrator | 2026-04-09 07:15:39.578548 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-09 07:15:39.578575 | orchestrator | Thursday 09 April 2026 07:15:36 +0000 (0:00:01.812) 0:00:18.502 ******** 2026-04-09 07:15:39.578589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:39.578624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:39.578638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:39.578651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:39.578669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:39.578689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:39.578710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:45.573600 | orchestrator | 2026-04-09 07:15:45.573614 | orchestrator | TASK [designate : Copying over 
designate.conf] ********************************* 2026-04-09 07:15:45.573627 | orchestrator | Thursday 09 April 2026 07:15:42 +0000 (0:00:06.141) 0:00:24.643 ******** 2026-04-09 07:15:45.573641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:45.573666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:54.947680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:15:54.947836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947856 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:54.947997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:15:54.948021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:05.566288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:05.566435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:05.566464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:05.566479 | orchestrator | 2026-04-09 07:16:05.566493 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-09 07:16:05.566514 | orchestrator | Thursday 09 April 2026 07:15:57 +0000 (0:00:15.039) 0:00:39.683 ******** 2026-04-09 07:16:05.566533 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-09 07:16:05.566553 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-09 07:16:05.566601 | 
orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-09 07:16:05.566619 | orchestrator | 2026-04-09 07:16:05.566636 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-09 07:16:05.566654 | orchestrator | Thursday 09 April 2026 07:16:01 +0000 (0:00:03.884) 0:00:43.568 ******** 2026-04-09 07:16:05.566671 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-09 07:16:05.566691 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-09 07:16:05.566708 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-09 07:16:05.566724 | orchestrator | 2026-04-09 07:16:05.566760 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-09 07:16:05.566780 | orchestrator | Thursday 09 April 2026 07:16:04 +0000 (0:00:02.620) 0:00:46.189 ******** 2026-04-09 07:16:05.566818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:05.566895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:05.566930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:05.566952 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:16:05.566973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:05.566992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:05.567034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:07.638692 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:16:07.638846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:07.638879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:07.638903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:07.638925 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:16:07.638976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:07.639026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:07.639116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:07.639143 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:07.639165 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:07.639187 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:07.639226 | orchestrator | 2026-04-09 07:16:07.639251 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-09 07:16:07.639274 | orchestrator | Thursday 09 April 2026 07:16:06 +0000 (0:00:02.874) 0:00:49.063 ******** 2026-04-09 07:16:07.639308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:08.785114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:08.785224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:08.785242 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:16:08.785276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:08.785290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:08.785321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:08.785340 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:16:08.785352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:08.785364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:08.785383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:08.785395 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:16:08.785414 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:10.650949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:10.651087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:10.651106 
| orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:10.651140 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:10.651153 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:10.651165 | orchestrator | 
2026-04-09 07:16:10.651213 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 07:16:10.651226 | orchestrator | Thursday 09 April 2026 07:16:09 +0000 (0:00:02.735) 0:00:51.799 ******** 2026-04-09 07:16:10.651237 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:16:10.651250 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:16:10.651261 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:16:10.651272 | orchestrator | 2026-04-09 07:16:10.651283 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-09 07:16:10.651294 | orchestrator | Thursday 09 April 2026 07:16:10 +0000 (0:00:00.319) 0:00:52.119 ******** 2026-04-09 07:16:10.651334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:10.651351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:16:10.651364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:10.651384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:10.651396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:10.651408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:16:10.651420 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:16:10.651444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:12.642624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:16:12.642760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:12.642779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:12.642792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:12.642804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:16:12.642816 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:16:12.642863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:12.642879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:16:12.642899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:12.642912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:12.642923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:12.642935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:16:12.642946 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:16:12.642958 | orchestrator | 2026-04-09 07:16:12.642970 | orchestrator | TASK [service-check-containers : designate | Check containers] ***************** 2026-04-09 07:16:12.642982 | orchestrator | Thursday 09 April 2026 07:16:11 +0000 (0:00:01.178) 0:00:53.297 ******** 2026-04-09 07:16:12.643007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:16:16.124673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:16:16.124815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:16:16.124848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:16:16.124888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:16:16.124902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 07:16:16.124956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:16.124971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:16.124983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:16.124995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:16.125007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:16.125023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:16.125095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:18.320179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:18.320287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:18.320304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:18.320317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:18.320347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:16:18.320385 | orchestrator | 2026-04-09 07:16:18.320400 | orchestrator | TASK [service-check-containers : 
designate | Notify handlers to restart containers] *** 2026-04-09 07:16:18.320412 | orchestrator | Thursday 09 April 2026 07:16:17 +0000 (0:00:05.923) 0:00:59.220 ******** 2026-04-09 07:16:18.320425 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 07:16:18.320437 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:16:18.320448 | orchestrator | } 2026-04-09 07:16:18.320459 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 07:16:18.320470 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:16:18.320481 | orchestrator | } 2026-04-09 07:16:18.320492 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 07:16:18.320503 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:16:18.320514 | orchestrator | } 2026-04-09 07:16:18.320525 | orchestrator | 2026-04-09 07:16:18.320536 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 07:16:18.320547 | orchestrator | Thursday 09 April 2026 07:16:17 +0000 (0:00:00.604) 0:00:59.825 ******** 2026-04-09 07:16:18.320579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:18.320595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:16:18.320609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:18.320623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:18.320650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:18.320664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:16:18.320677 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:16:18.320699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:35.654008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:16:35.654203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:35.654249 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:35.654281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:35.654298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:16:35.654314 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 07:16:35.654352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:16:35.654372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 07:16:35.654387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 07:16:35.654413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 07:16:35.654434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 07:16:35.654449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:16:35.654464 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:16:35.654479 | orchestrator | 2026-04-09 07:16:35.654494 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-09 07:16:35.654510 | orchestrator | Thursday 09 April 2026 07:16:19 +0000 (0:00:01.464) 0:01:01.290 ******** 2026-04-09 07:16:35.654525 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:16:35.654539 | orchestrator | 2026-04-09 07:16:35.654553 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-09 07:16:35.654568 | orchestrator | Thursday 09 April 2026 07:16:35 +0000 (0:00:16.087) 0:01:17.377 ******** 2026-04-09 07:16:35.654583 | orchestrator | 2026-04-09 07:16:35.654598 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-09 07:16:35.654612 | orchestrator | Thursday 09 April 2026 07:16:35 +0000 (0:00:00.092) 0:01:17.470 ******** 2026-04-09 07:16:35.654626 | orchestrator | 2026-04-09 07:16:35.654640 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-09 07:16:35.654661 | orchestrator | Thursday 09 April 2026 07:16:35 +0000 (0:00:00.269) 0:01:17.739 ******** 2026-04-09 07:18:51.677055 | orchestrator | 2026-04-09 07:18:51.677233 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-09 07:18:51.677251 | orchestrator | [WARNING]: Failure using method 
(v2_playbook_on_handler_task_start) in callback 2026-04-09 07:18:51.677264 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-09 07:18:51.677288 | orchestrator | Thursday 09 April 2026 07:16:35 +0000 (0:00:00.075) 0:01:17.815 ******** 2026-04-09 07:18:51.677331 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:18:51.677344 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:18:51.677355 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:18:51.677366 | orchestrator | 2026-04-09 07:18:51.677378 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-09 07:18:51.677390 | orchestrator | Thursday 09 April 2026 07:16:49 +0000 (0:00:13.968) 0:01:31.784 ******** 2026-04-09 07:18:51.677401 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:18:51.677413 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:18:51.677424 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:18:51.677434 | orchestrator | 2026-04-09 07:18:51.677446 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-09 07:18:51.677457 | orchestrator | Thursday 09 April 2026 07:17:02 +0000 (0:00:12.554) 0:01:44.338 ******** 2026-04-09 07:18:51.677467 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:18:51.677478 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:18:51.677489 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:18:51.677500 | orchestrator | 2026-04-09 07:18:51.677511 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-09 07:18:51.677522 | orchestrator | Thursday 09 April 2026 07:17:14 +0000 (0:00:12.394) 0:01:56.732 ******** 2026-04-09 07:18:51.677534 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:18:51.677547 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:18:51.677559 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:18:51.677572 | orchestrator 
| 2026-04-09 07:18:51.677585 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-09 07:18:51.677597 | orchestrator | Thursday 09 April 2026 07:18:17 +0000 (0:01:02.766) 0:02:59.499 ******** 2026-04-09 07:18:51.677609 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:18:51.677623 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:18:51.677635 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:18:51.677648 | orchestrator | 2026-04-09 07:18:51.677660 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-09 07:18:51.677673 | orchestrator | Thursday 09 April 2026 07:18:29 +0000 (0:00:12.469) 0:03:11.969 ******** 2026-04-09 07:18:51.677686 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:18:51.677698 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:18:51.677711 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:18:51.677723 | orchestrator | 2026-04-09 07:18:51.677736 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-09 07:18:51.677749 | orchestrator | Thursday 09 April 2026 07:18:42 +0000 (0:00:13.010) 0:03:24.980 ******** 2026-04-09 07:18:51.677762 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:18:51.677774 | orchestrator | 2026-04-09 07:18:51.677787 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:18:51.677819 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 07:18:51.677834 | orchestrator | testbed-node-1 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 07:18:51.677847 | orchestrator | testbed-node-2 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 07:18:51.677860 | orchestrator | 2026-04-09 07:18:51.677872 | orchestrator | 2026-04-09 07:18:51.677885 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:18:51.677898 | orchestrator | Thursday 09 April 2026 07:18:51 +0000 (0:00:08.453) 0:03:33.433 ******** 2026-04-09 07:18:51.677911 | orchestrator | =============================================================================== 2026-04-09 07:18:51.677922 | orchestrator | designate : Restart designate-producer container ----------------------- 62.77s 2026-04-09 07:18:51.677933 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.09s 2026-04-09 07:18:51.677952 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.04s 2026-04-09 07:18:51.677964 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.97s 2026-04-09 07:18:51.677974 | orchestrator | designate : Restart designate-worker container ------------------------- 13.01s 2026-04-09 07:18:51.677985 | orchestrator | designate : Restart designate-api container ---------------------------- 12.55s 2026-04-09 07:18:51.677996 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.47s 2026-04-09 07:18:51.678007 | orchestrator | designate : Restart designate-central container ------------------------ 12.39s 2026-04-09 07:18:51.678109 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.45s 2026-04-09 07:18:51.678125 | orchestrator | designate : Copying over config.json files for services ----------------- 6.14s 2026-04-09 07:18:51.678136 | orchestrator | service-check-containers : designate | Check containers ----------------- 5.92s 2026-04-09 07:18:51.678159 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.71s 2026-04-09 07:18:51.678171 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.88s 2026-04-09 07:18:51.678182 | orchestrator | 
designate : Ensuring config directories exist --------------------------- 3.56s 2026-04-09 07:18:51.678215 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.87s 2026-04-09 07:18:51.678227 | orchestrator | designate : Copying over rndc.key --------------------------------------- 2.74s 2026-04-09 07:18:51.678238 | orchestrator | designate : Copying over named.conf ------------------------------------- 2.62s 2026-04-09 07:18:51.678249 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 1.81s 2026-04-09 07:18:51.678260 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS certificate --- 1.52s 2026-04-09 07:18:51.678271 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.46s 2026-04-09 07:18:51.852784 | orchestrator | + osism apply -a upgrade ceilometer 2026-04-09 07:18:53.190272 | orchestrator | 2026-04-09 07:18:53 | INFO  | Prepare task for execution of ceilometer. 2026-04-09 07:18:53.257160 | orchestrator | 2026-04-09 07:18:53 | INFO  | Task 3392e3d2-718d-4591-87d5-afa9606462ec (ceilometer) was prepared for execution. 2026-04-09 07:18:53.257268 | orchestrator | 2026-04-09 07:18:53 | INFO  | It takes a moment until task 3392e3d2-718d-4591-87d5-afa9606462ec (ceilometer) has been started and output is visible here. 
2026-04-09 07:19:13.205165 | orchestrator | 2026-04-09 07:19:13.205290 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 07:19:13.205309 | orchestrator | 2026-04-09 07:19:13.205322 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 07:19:13.205333 | orchestrator | Thursday 09 April 2026 07:18:58 +0000 (0:00:01.472) 0:00:01.472 ******** 2026-04-09 07:19:13.205344 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:19:13.205356 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:19:13.205368 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:19:13.205379 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:19:13.205390 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:19:13.205401 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:19:13.205412 | orchestrator | 2026-04-09 07:19:13.205423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 07:19:13.205434 | orchestrator | Thursday 09 April 2026 07:19:00 +0000 (0:00:02.822) 0:00:04.294 ******** 2026-04-09 07:19:13.205446 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-09 07:19:13.205457 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-09 07:19:13.205468 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-09 07:19:13.205479 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-09 07:19:13.205490 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-09 07:19:13.205501 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-09 07:19:13.205536 | orchestrator | 2026-04-09 07:19:13.205547 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-09 07:19:13.205558 | orchestrator | 2026-04-09 07:19:13.205569 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-04-09 07:19:13.205580 | orchestrator | Thursday 09 April 2026 07:19:03 +0000 (0:00:02.180) 0:00:06.475 ******** 2026-04-09 07:19:13.205618 | orchestrator | included: /ansible/roles/ceilometer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 07:19:13.205631 | orchestrator | 2026-04-09 07:19:13.205642 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-09 07:19:13.205653 | orchestrator | Thursday 09 April 2026 07:19:05 +0000 (0:00:02.711) 0:00:09.187 ******** 2026-04-09 07:19:13.205668 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:19:13.205684 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:19:13.205696 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:19:13.205726 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:13.205740 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:13.205766 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:13.205778 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:13.205790 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:13.205802 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:13.205814 | orchestrator | 2026-04-09 07:19:13.205825 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-09 07:19:13.205836 | orchestrator | Thursday 09 April 2026 07:19:10 +0000 (0:00:04.242) 0:00:13.430 ******** 2026-04-09 07:19:13.205848 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 07:19:13.205859 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 07:19:13.205870 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 07:19:13.205881 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 07:19:13.205892 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 07:19:13.205903 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 07:19:13.205914 | orchestrator | 2026-04-09 07:19:13.205925 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-09 07:19:13.205938 | orchestrator | Thursday 09 April 2026 07:19:12 +0000 (0:00:02.968) 
0:00:16.398 ******** 2026-04-09 07:19:13.205949 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:19:13.205968 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:19:21.379889 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:19:21.380005 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:19:21.380021 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:19:21.380034 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:19:21.380046 | orchestrator | 2026-04-09 07:19:21.380059 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-09 07:19:21.380072 | orchestrator | Thursday 09 April 2026 07:19:14 +0000 (0:00:01.882) 0:00:18.281 ******** 2026-04-09 07:19:21.380084 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:21.380144 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:21.380157 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:21.380168 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:21.380179 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:21.380190 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:21.380201 | orchestrator | 2026-04-09 07:19:21.380212 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter definitions] *** 2026-04-09 07:19:21.380225 | orchestrator | Thursday 09 April 2026 07:19:16 +0000 (0:00:02.115) 0:00:20.397 ******** 2026-04-09 07:19:21.380236 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:19:21.380247 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:19:21.380258 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:19:21.380269 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:19:21.380280 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:19:21.380291 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:19:21.380302 | orchestrator | 2026-04-09 07:19:21.380312 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 
2026-04-09 07:19:21.380324 | orchestrator | Thursday 09 April 2026 07:19:18 +0000 (0:00:01.862) 0:00:22.259 ******** 2026-04-09 07:19:21.380356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:21.380373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:21.380385 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:21.380398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:21.380410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:21.380467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:21.380482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:21.380495 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:21.380514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:21.380528 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:21.380542 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:21.380555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:21.380568 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:21.380581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:21.380603 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:21.380616 | orchestrator | 2026-04-09 07:19:21.380629 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-09 07:19:21.380642 | orchestrator | Thursday 09 April 2026 07:19:21 +0000 (0:00:02.285) 0:00:24.544 ******** 2026-04-09 07:19:21.380656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:21.380678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:36.455004 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:36.455145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:36.455183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 
'timeout': '30'}}})  2026-04-09 07:19:36.455196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:36.455208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:36.455242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:36.455254 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:36.455265 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:36.455272 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:36.455292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:36.455299 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:36.455309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:36.455316 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:36.455322 | orchestrator | 
2026-04-09 07:19:36.455330 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-09 07:19:36.455337 | orchestrator | Thursday 09 April 2026 07:19:23 +0000 (0:00:02.576) 0:00:27.121 ******** 2026-04-09 07:19:36.455344 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 07:19:36.455350 | orchestrator | 2026-04-09 07:19:36.455356 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-09 07:19:36.455363 | orchestrator | Thursday 09 April 2026 07:19:25 +0000 (0:00:01.771) 0:00:28.893 ******** 2026-04-09 07:19:36.455369 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:19:36.455376 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:19:36.455381 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:19:36.455387 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:19:36.455393 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:19:36.455404 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:19:36.455410 | orchestrator | 2026-04-09 07:19:36.455416 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-09 07:19:36.455422 | orchestrator | Thursday 09 April 2026 07:19:27 +0000 (0:00:01.810) 0:00:30.703 ******** 2026-04-09 07:19:36.455427 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:19:36.455433 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:19:36.455439 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:19:36.455445 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:19:36.455450 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:19:36.455456 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:19:36.455462 | orchestrator | 2026-04-09 07:19:36.455468 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-09 07:19:36.455474 | orchestrator | Thursday 09 
April 2026 07:19:29 +0000 (0:00:02.259) 0:00:32.963 ******** 2026-04-09 07:19:36.455480 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:36.455486 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:36.455492 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:36.455498 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:36.455504 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:36.455510 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:36.455516 | orchestrator | 2026-04-09 07:19:36.455522 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-09 07:19:36.455528 | orchestrator | Thursday 09 April 2026 07:19:31 +0000 (0:00:01.736) 0:00:34.700 ******** 2026-04-09 07:19:36.455533 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:36.455539 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:36.455545 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:36.455551 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:36.455557 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:36.455562 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:36.455568 | orchestrator | 2026-04-09 07:19:36.455574 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************ 2026-04-09 07:19:36.455580 | orchestrator | Thursday 09 April 2026 07:19:33 +0000 (0:00:02.071) 0:00:36.772 ******** 2026-04-09 07:19:36.455586 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 07:19:36.455592 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 07:19:36.455597 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 07:19:36.455603 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 07:19:36.455609 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 07:19:36.455615 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 07:19:36.455621 | orchestrator | 
2026-04-09 07:19:36.455626 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-09 07:19:36.455632 | orchestrator | Thursday 09 April 2026 07:19:36 +0000 (0:00:02.845) 0:00:39.618 ******** 2026-04-09 07:19:36.455639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:36.455651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:43.207940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:43.208052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:43.208070 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:43.208085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:43.208163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 
'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:43.208178 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:43.208190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:43.208202 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:43.208214 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:43.208242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:43.208277 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:43.208296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:43.208308 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:43.208320 | orchestrator | 2026-04-09 07:19:43.208332 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-04-09 07:19:43.208345 | orchestrator | Thursday 09 April 2026 07:19:38 +0000 (0:00:02.132) 0:00:41.751 ******** 2026-04-09 07:19:43.208356 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:43.208367 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:43.208378 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:43.208390 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:43.208401 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:43.208411 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:43.208422 | orchestrator | 2026-04-09 07:19:43.208434 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-04-09 07:19:43.208446 | orchestrator | Thursday 09 
April 2026 07:19:40 +0000 (0:00:01.813) 0:00:43.564 ******** 2026-04-09 07:19:43.208459 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 07:19:43.208472 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 07:19:43.208485 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 07:19:43.208498 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 07:19:43.208510 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 07:19:43.208522 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 07:19:43.208535 | orchestrator | 2026-04-09 07:19:43.208548 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-04-09 07:19:43.208561 | orchestrator | Thursday 09 April 2026 07:19:42 +0000 (0:00:02.715) 0:00:46.279 ******** 2026-04-09 07:19:43.208575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:43.208590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:43.208611 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:43.208624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:43.208652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:53.784463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:53.784550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:53.784558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:53.784565 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:53.784571 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:53.784576 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 07:19:53.784581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:53.784607 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:53.784615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:53.784623 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:53.784631 | orchestrator | 2026-04-09 07:19:53.784640 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-04-09 07:19:53.784649 | orchestrator | Thursday 09 April 2026 07:19:45 +0000 (0:00:02.171) 0:00:48.451 ******** 2026-04-09 07:19:53.784657 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:53.784664 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:53.784672 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:53.784680 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:53.784688 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:53.784709 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:53.784717 | orchestrator | 2026-04-09 07:19:53.784722 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-04-09 07:19:53.784738 | orchestrator | Thursday 09 April 2026 07:19:46 +0000 (0:00:01.744) 0:00:50.196 ******** 2026-04-09 07:19:53.784742 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:53.784747 | orchestrator | 2026-04-09 07:19:53.784752 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-04-09 07:19:53.784756 | orchestrator | Thursday 09 April 2026 07:19:47 +0000 (0:00:01.129) 0:00:51.326 ******** 2026-04-09 07:19:53.784761 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:53.784766 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:53.784770 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:53.784775 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:53.784779 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:19:53.784784 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:19:53.784788 | orchestrator | 2026-04-09 07:19:53.784793 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-09 07:19:53.784798 | orchestrator | Thursday 09 April 2026 07:19:49 +0000 (0:00:01.888) 0:00:53.214 ******** 2026-04-09 07:19:53.784803 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 07:19:53.784809 | orchestrator | 2026-04-09 07:19:53.784814 | orchestrator | TASK [service-cert-copy : 
ceilometer | Copying over extra CA certificates] ***** 2026-04-09 07:19:53.784819 | orchestrator | Thursday 09 April 2026 07:19:52 +0000 (0:00:02.509) 0:00:55.724 ******** 2026-04-09 07:19:53.784824 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:19:53.784836 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:19:53.784841 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:53.784847 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:19:53.784860 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:56.565754 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:56.565864 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:56.565904 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:56.565917 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:19:56.565930 | orchestrator | 2026-04-09 07:19:56.565943 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-09 07:19:56.565956 | orchestrator | Thursday 09 April 2026 07:19:55 +0000 (0:00:03.350) 0:00:59.074 ******** 2026-04-09 07:19:56.565969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:56.565997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:56.566009 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:19:56.566160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:56.566184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:19:56.566197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:56.566208 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:19:56.566220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:56.566231 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:19:56.566243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:19:56.566255 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:19:56.566279 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:02.102466 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:20:02.102576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:02.102618 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:20:02.102632 | orchestrator | 2026-04-09 07:20:02.102644 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-09 07:20:02.102767 | orchestrator | Thursday 09 April 2026 07:19:57 +0000 (0:00:02.211) 0:01:01.286 ******** 2026-04-09 07:20:02.102789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:02.102811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:02.102831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:02.102850 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 07:20:02.102869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:02.102933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:02.102974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:02.102996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:02.103016 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:20:02.103031 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:20:02.103044 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:20:02.103058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:02.103071 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:20:02.103084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 
'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:02.103121 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:20:02.103135 | orchestrator | 2026-04-09 07:20:02.103149 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-04-09 07:20:02.103161 | orchestrator | Thursday 09 April 2026 07:20:00 +0000 (0:00:02.790) 0:01:04.076 ******** 2026-04-09 07:20:02.103176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:02.103207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:08.143648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:08.143731 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:08.143744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 
'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:08.143754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:08.143768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:08.143797 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:08.143826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:08.143839 | orchestrator | 2026-04-09 07:20:08.143852 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-04-09 07:20:08.143864 | orchestrator | Thursday 09 April 2026 07:20:05 +0000 (0:00:04.344) 0:01:08.421 ******** 2026-04-09 07:20:08.143876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:08.143888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:08.143900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:08.143923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:08.143942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:25.615677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:25.615824 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:25.615837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:25.615846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:25.615881 | orchestrator | 2026-04-09 07:20:25.615890 | 
orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-09 07:20:25.615899 | orchestrator | Thursday 09 April 2026 07:20:11 +0000 (0:00:06.159) 0:01:14.581 ******** 2026-04-09 07:20:25.615905 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 07:20:25.615913 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 07:20:25.615920 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 07:20:25.615926 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 07:20:25.615932 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 07:20:25.615938 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 07:20:25.615944 | orchestrator | 2026-04-09 07:20:25.615968 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-09 07:20:25.615974 | orchestrator | Thursday 09 April 2026 07:20:13 +0000 (0:00:02.734) 0:01:17.316 ******** 2026-04-09 07:20:25.615981 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:20:25.615987 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:20:25.615993 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:20:25.615999 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:20:25.616005 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:20:25.616011 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:20:25.616017 | orchestrator | 2026-04-09 07:20:25.616023 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-09 07:20:25.616031 | orchestrator | Thursday 09 April 2026 07:20:15 +0000 (0:00:01.969) 0:01:19.285 ******** 2026-04-09 07:20:25.616037 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:20:25.616043 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:20:25.616050 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:20:25.616056 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:20:25.616064 | 
orchestrator | ok: [testbed-node-1] 2026-04-09 07:20:25.616070 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:20:25.616077 | orchestrator | 2026-04-09 07:20:25.616083 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-09 07:20:25.616089 | orchestrator | Thursday 09 April 2026 07:20:18 +0000 (0:00:02.610) 0:01:21.896 ******** 2026-04-09 07:20:25.616095 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:20:25.616101 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:20:25.616136 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:20:25.616143 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:20:25.616168 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:20:25.616174 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:20:25.616181 | orchestrator | 2026-04-09 07:20:25.616187 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-09 07:20:25.616193 | orchestrator | Thursday 09 April 2026 07:20:20 +0000 (0:00:02.323) 0:01:24.220 ******** 2026-04-09 07:20:25.616200 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 07:20:25.616206 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 07:20:25.616212 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 07:20:25.616219 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 07:20:25.616226 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 07:20:25.616233 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 07:20:25.616240 | orchestrator | 2026-04-09 07:20:25.616247 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-09 07:20:25.616253 | orchestrator | Thursday 09 April 2026 07:20:23 +0000 (0:00:02.957) 0:01:27.177 ******** 2026-04-09 07:20:25.616262 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 
'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:25.616278 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:25.616285 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:25.616297 | orchestrator | 
ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:25.616305 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:25.616317 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:28.034994 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:28.035246 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:28.035277 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:28.035299 | orchestrator | 2026-04-09 07:20:28.035320 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-09 07:20:28.035340 | orchestrator | Thursday 09 April 2026 07:20:27 +0000 (0:00:03.266) 0:01:30.444 ******** 2026-04-09 07:20:28.035383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:28.035406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:28.035424 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:20:28.035445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:28.035504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:28.035526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:28.035547 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:28.035566 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:20:28.035594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:28.035615 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:20:28.035634 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:20:28.035652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:28.035669 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:20:28.035698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:35.043287 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:20:35.043383 | orchestrator | 2026-04-09 07:20:35.043396 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-09 07:20:35.043406 | orchestrator | Thursday 09 April 2026 07:20:29 +0000 (0:00:02.182) 0:01:32.627 ******** 2026-04-09 07:20:35.043414 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:20:35.043423 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:20:35.043431 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:20:35.043440 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:20:35.043448 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:20:35.043456 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:20:35.043464 | orchestrator | 2026-04-09 07:20:35.043472 | orchestrator | TASK [ceilometer : 
Copying over existing policy file] ************************** 2026-04-09 07:20:35.043481 | orchestrator | Thursday 09 April 2026 07:20:31 +0000 (0:00:02.047) 0:01:34.674 ******** 2026-04-09 07:20:35.043491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:35.043503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:35.043512 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:20:35.043534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:35.043543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:35.043570 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:20:35.043579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:35.043602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 
'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:35.043611 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:20:35.043620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:35.043629 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:20:35.043638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:35.043646 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:20:35.043658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:35.043668 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:20:35.043676 | orchestrator | 2026-04-09 07:20:35.043684 | orchestrator | TASK [service-check-containers : ceilometer | Check containers] **************** 2026-04-09 07:20:35.043698 | orchestrator | Thursday 09 April 2026 07:20:33 +0000 (0:00:02.547) 0:01:37.222 ******** 2026-04-09 07:20:35.043708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 
07:20:35.043723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:39.403394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-09 07:20:39.403506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:39.403523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:39.403553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:39.403587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:39.403599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:39.403630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-09 07:20:39.403643 | orchestrator | 2026-04-09 07:20:39.403656 | orchestrator | TASK [service-check-containers : ceilometer | Notify handlers to restart containers] *** 2026-04-09 07:20:39.403669 | orchestrator | Thursday 09 April 2026 07:20:36 +0000 (0:00:03.188) 0:01:40.410 ******** 2026-04-09 07:20:39.403681 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 07:20:39.403693 
| orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:20:39.403704 | orchestrator | } 2026-04-09 07:20:39.403716 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 07:20:39.403727 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:20:39.403738 | orchestrator | } 2026-04-09 07:20:39.403749 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 07:20:39.403760 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:20:39.403771 | orchestrator | } 2026-04-09 07:20:39.403789 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 07:20:39.403806 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:20:39.403824 | orchestrator | } 2026-04-09 07:20:39.403842 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 07:20:39.403858 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:20:39.403875 | orchestrator | } 2026-04-09 07:20:39.403892 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 07:20:39.403908 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:20:39.403924 | orchestrator | } 2026-04-09 07:20:39.403940 | orchestrator | 2026-04-09 07:20:39.403959 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 07:20:39.403978 | orchestrator | Thursday 09 April 2026 07:20:38 +0000 (0:00:01.785) 0:01:42.196 ******** 2026-04-09 07:20:39.403999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:39.404042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:39.404064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:20:39.404078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:20:39.404102 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:21:33.686594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-09 07:21:33.686716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:21:33.686735 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:21:33.686750 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:21:33.686784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:21:33.686798 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:21:33.686824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:21:33.686837 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:21:33.686848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 07:21:33.686860 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:21:33.686872 | orchestrator | 2026-04-09 07:21:33.686884 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-09 07:21:33.686896 | orchestrator | Thursday 09 April 2026 07:20:41 +0000 (0:00:02.816) 0:01:45.013 ******** 2026-04-09 07:21:33.686908 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:21:33.686919 | orchestrator | 2026-04-09 07:21:33.686930 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 07:21:33.686941 | orchestrator | Thursday 09 April 2026 07:20:51 +0000 (0:00:09.897) 0:01:54.910 ******** 2026-04-09 07:21:33.686953 | orchestrator | 2026-04-09 07:21:33.686965 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 07:21:33.686992 | orchestrator | Thursday 09 April 2026 07:20:52 +0000 (0:00:00.656) 0:01:55.567 ******** 2026-04-09 07:21:33.687004 | orchestrator | 2026-04-09 07:21:33.687015 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 07:21:33.687027 | orchestrator | Thursday 09 April 2026 07:20:52 +0000 (0:00:00.437) 0:01:56.004 ******** 2026-04-09 07:21:33.687038 | orchestrator | 2026-04-09 07:21:33.687049 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 07:21:33.687060 | orchestrator | Thursday 09 April 2026 07:20:53 +0000 (0:00:00.416) 0:01:56.421 ******** 2026-04-09 07:21:33.687071 | orchestrator | 2026-04-09 07:21:33.687082 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 07:21:33.687094 | orchestrator | Thursday 09 April 2026 07:20:53 +0000 (0:00:00.419) 0:01:56.841 ******** 2026-04-09 07:21:33.687105 | 
orchestrator | 2026-04-09 07:21:33.687116 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-09 07:21:33.687171 | orchestrator | Thursday 09 April 2026 07:20:53 +0000 (0:00:00.438) 0:01:57.279 ******** 2026-04-09 07:21:33.687185 | orchestrator | 2026-04-09 07:21:33.687198 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-04-09 07:21:33.687211 | orchestrator | Thursday 09 April 2026 07:20:54 +0000 (0:00:00.828) 0:01:58.108 ******** 2026-04-09 07:21:33.687225 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:21:33.687238 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:21:33.687251 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:21:33.687264 | orchestrator | 2026-04-09 07:21:33.687276 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-04-09 07:21:33.687289 | orchestrator | Thursday 09 April 2026 07:21:07 +0000 (0:00:13.017) 0:02:11.125 ******** 2026-04-09 07:21:33.687302 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:21:33.687315 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:21:33.687328 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:21:33.687340 | orchestrator | 2026-04-09 07:21:33.687354 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-04-09 07:21:33.687367 | orchestrator | Thursday 09 April 2026 07:21:20 +0000 (0:00:12.472) 0:02:23.597 ******** 2026-04-09 07:21:33.687379 | orchestrator | changed: [testbed-node-3] 2026-04-09 07:21:33.687392 | orchestrator | changed: [testbed-node-4] 2026-04-09 07:21:33.687405 | orchestrator | changed: [testbed-node-5] 2026-04-09 07:21:33.687418 | orchestrator | 2026-04-09 07:21:33.687431 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:21:33.687445 | orchestrator | testbed-node-0 : ok=26  changed=7  
unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-09 07:21:33.687461 | orchestrator | testbed-node-1 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 07:21:33.687474 | orchestrator | testbed-node-2 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 07:21:33.687493 | orchestrator | testbed-node-3 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-09 07:21:33.687505 | orchestrator | testbed-node-4 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-09 07:21:33.687516 | orchestrator | testbed-node-5 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-09 07:21:33.687527 | orchestrator | 2026-04-09 07:21:33.687538 | orchestrator | 2026-04-09 07:21:33.687549 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:21:33.687561 | orchestrator | Thursday 09 April 2026 07:21:33 +0000 (0:00:13.470) 0:02:37.068 ******** 2026-04-09 07:21:33.687572 | orchestrator | =============================================================================== 2026-04-09 07:21:33.687583 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 13.47s 2026-04-09 07:21:33.687594 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 13.02s 2026-04-09 07:21:33.687605 | orchestrator | ceilometer : Restart ceilometer-central container ---------------------- 12.47s 2026-04-09 07:21:33.687616 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 9.90s 2026-04-09 07:21:33.687627 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 6.16s 2026-04-09 07:21:33.687638 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 4.34s 2026-04-09 07:21:33.687649 | orchestrator | ceilometer : Ensuring config 
directories exist -------------------------- 4.24s 2026-04-09 07:21:33.687660 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 3.35s 2026-04-09 07:21:33.687671 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 3.27s 2026-04-09 07:21:33.687689 | orchestrator | ceilometer : Flush handlers --------------------------------------------- 3.20s 2026-04-09 07:21:33.687700 | orchestrator | service-check-containers : ceilometer | Check containers ---------------- 3.19s 2026-04-09 07:21:33.687711 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 2.97s 2026-04-09 07:21:33.687722 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 2.96s 2026-04-09 07:21:33.687733 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 2.85s 2026-04-09 07:21:33.687744 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.82s 2026-04-09 07:21:33.687756 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.82s 2026-04-09 07:21:33.687774 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 2.79s 2026-04-09 07:21:34.099104 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 2.73s 2026-04-09 07:21:34.099238 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 2.72s 2026-04-09 07:21:34.099250 | orchestrator | ceilometer : include_tasks ---------------------------------------------- 2.71s 2026-04-09 07:21:34.297952 | orchestrator | + osism apply -a upgrade aodh 2026-04-09 07:21:35.654649 | orchestrator | 2026-04-09 07:21:35 | INFO  | Prepare task for execution of aodh. 2026-04-09 07:21:35.719339 | orchestrator | 2026-04-09 07:21:35 | INFO  | Task 412424bf-2d6a-484e-ba87-41da42a4ee69 (aodh) was prepared for execution. 
2026-04-09 07:21:35.719451 | orchestrator | 2026-04-09 07:21:35 | INFO  | It takes a moment until task 412424bf-2d6a-484e-ba87-41da42a4ee69 (aodh) has been started and output is visible here. 2026-04-09 07:21:45.521948 | orchestrator | 2026-04-09 07:21:45.522167 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 07:21:45.522190 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-09 07:21:45.522203 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-09 07:21:45.522224 | orchestrator | 2026-04-09 07:21:45.522234 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 07:21:45.522244 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-09 07:21:45.522253 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-09 07:21:45.522273 | orchestrator | Thursday 09 April 2026 07:21:40 +0000 (0:00:01.059) 0:00:01.059 ******** 2026-04-09 07:21:45.522283 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:21:45.522294 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:21:45.522303 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:21:45.522314 | orchestrator | 2026-04-09 07:21:45.522324 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 07:21:45.522334 | orchestrator | Thursday 09 April 2026 07:21:41 +0000 (0:00:00.946) 0:00:02.006 ******** 2026-04-09 07:21:45.522344 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-09 07:21:45.522354 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-09 07:21:45.522364 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-09 07:21:45.522373 | orchestrator | 2026-04-09 07:21:45.522383 | orchestrator | PLAY [Apply role aodh] ********************************************************* 
2026-04-09 07:21:45.522393 | orchestrator | 2026-04-09 07:21:45.522402 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-09 07:21:45.522412 | orchestrator | Thursday 09 April 2026 07:21:41 +0000 (0:00:00.825) 0:00:02.832 ******** 2026-04-09 07:21:45.522438 | orchestrator | included: /ansible/roles/aodh/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:21:45.522449 | orchestrator | 2026-04-09 07:21:45.522482 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-09 07:21:45.522495 | orchestrator | Thursday 09 April 2026 07:21:43 +0000 (0:00:01.421) 0:00:04.254 ******** 2026-04-09 07:21:45.522510 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:21:45.522529 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:21:45.522561 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:21:45.522574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:21:45.522592 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:21:45.522611 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:21:45.522624 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:45.522635 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:45.522654 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:47.243732 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:47.243838 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:47.243894 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:47.243909 | orchestrator | 2026-04-09 07:21:47.243923 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-09 07:21:47.243935 | orchestrator | Thursday 09 April 2026 07:21:45 +0000 (0:00:02.646) 0:00:06.901 ******** 2026-04-09 07:21:47.243947 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 07:21:47.243959 | orchestrator | 2026-04-09 07:21:47.243970 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-09 07:21:47.243981 | orchestrator | Thursday 09 April 2026 07:21:46 +0000 (0:00:00.114) 0:00:07.015 ******** 2026-04-09 07:21:47.243992 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:21:47.244004 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:21:47.244015 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:21:47.244026 | orchestrator | 2026-04-09 07:21:47.244037 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-04-09 07:21:47.244048 | orchestrator | Thursday 09 April 2026 07:21:46 +0000 (0:00:00.308) 0:00:07.324 ******** 2026-04-09 07:21:47.244060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:21:47.244076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 07:21:47.244106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:21:47.244154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 07:21:47.244168 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:21:47.244186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:21:47.244199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 07:21:47.244211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:21:47.244223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 07:21:47.244234 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:21:47.244256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:21:51.314446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 07:21:51.314559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:21:51.314578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 07:21:51.314591 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:21:51.314606 | orchestrator | 2026-04-09 07:21:51.314619 | orchestrator | TASK 
[aodh : include_tasks] **************************************************** 2026-04-09 07:21:51.314631 | orchestrator | Thursday 09 April 2026 07:21:47 +0000 (0:00:00.998) 0:00:08.322 ******** 2026-04-09 07:21:51.314643 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:21:51.314655 | orchestrator | 2026-04-09 07:21:51.314667 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-09 07:21:51.314678 | orchestrator | Thursday 09 April 2026 07:21:48 +0000 (0:00:00.939) 0:00:09.262 ******** 2026-04-09 07:21:51.314690 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:21:51.314747 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:21:51.314769 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:21:51.314782 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:21:51.314795 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:21:51.314806 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:21:51.314827 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:51.314846 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:53.360503 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:53.360611 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:53.360628 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:53.360641 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:53.360653 | orchestrator | 2026-04-09 07:21:53.360667 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-09 07:21:53.360680 | orchestrator | Thursday 09 April 2026 07:21:52 +0000 (0:00:04.143) 0:00:13.405 ******** 2026-04-09 07:21:53.360693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:21:53.360749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 07:21:53.360770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:21:53.360782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:21:53.360795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:21:53.360815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 07:21:53.360827 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:21:53.360841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 07:21:53.360868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 07:21:54.302270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:21:54.302379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:21:54.302396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:21:54.302434 | orchestrator | skipping: [testbed-node-1]
2026-04-09 07:21:54.302449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:21:54.302460 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:21:54.302473 | orchestrator |
2026-04-09 07:21:54.302485 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ********
2026-04-09 07:21:54.302497 | orchestrator | Thursday 09 April 2026 07:21:53 +0000 (0:00:01.176) 0:00:14.581 ********
2026-04-09 07:21:54.302510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option
httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:21:54.302558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 07:21:54.302573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:21:54.302585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:21:54.302604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 07:21:54.302616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:21:54.302633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 07:21:54.302645 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:21:54.302666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:21:58.356345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 07:21:58.356465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 07:21:58.356510 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:21:58.356526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 07:21:58.356539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:21:58.356550 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:21:58.356566 | orchestrator |
2026-04-09 07:21:58.356586 | orchestrator | TASK [aodh : Copying over config.json files for services] **********************
2026-04-09 07:21:58.356599 | orchestrator | Thursday 09 April 2026 07:21:54 +0000 (0:00:01.163) 0:00:15.745 ********
2026-04-09 07:21:58.356628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:21:58.356694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:21:58.356719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:21:58.356732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:21:58.356744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:21:58.356762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:21:58.356774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:21:58.356793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:22:06.140831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:22:06.140925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:22:06.140943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:22:06.140956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:22:06.140968 | orchestrator |
2026-04-09 07:22:06.140981 | orchestrator | TASK [aodh : Copying over aodh.conf] *******************************************
2026-04-09 07:22:06.140994 | orchestrator | Thursday 09 April 2026 07:21:59 +0000 (0:00:04.868) 0:00:20.613 ********
2026-04-09 07:22:06.141020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name':
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:22:06.141052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:22:06.141084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': 
{'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:22:06.141097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:22:06.141109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:22:06.141125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:22:06.141188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:22:06.141219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:22:13.168254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-09 07:22:13.168346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-09 07:22:13.168362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:22:13.168389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:22:13.168402 | orchestrator |
2026-04-09 07:22:13.168415 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************
2026-04-09 07:22:13.168428 | orchestrator | Thursday 09 April 2026 07:22:08 +0000 (0:00:08.686) 0:00:29.300 ********
2026-04-09 07:22:13.168439 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:22:13.168451 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:22:13.168462 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:22:13.168473 | orchestrator |
2026-04-09 07:22:13.168485 | orchestrator | TASK [service-check-containers : aodh | Check containers] **********************
2026-04-09 07:22:13.168516 | orchestrator | Thursday 09 April 2026 07:22:10 +0000 (0:00:01.968) 0:00:31.268 ********
2026-04-09 07:22:13.168529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:22:13.168562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:22:13.168576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:22:13.168588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:22:13.168606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-09 07:22:13.168626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 07:22:13.168645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 07:22:15.212911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 07:22:15.213014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 07:22:15.213040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:22:15.213074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:22:15.213106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:22:15.213119 | orchestrator |
2026-04-09 07:22:15.213132 | orchestrator | TASK [service-check-containers : aodh | Notify handlers to restart containers] ***
2026-04-09 07:22:15.213191 | orchestrator | Thursday 09 April 2026 07:22:14 +0000 (0:00:04.054) 0:00:35.323 ********
2026-04-09 07:22:15.213203 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 07:22:15.213215 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 07:22:15.213227 | orchestrator | }
2026-04-09 07:22:15.213238 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 07:22:15.213250 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 07:22:15.213261 | orchestrator | }
2026-04-09 07:22:15.213272 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 07:22:15.213283 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 07:22:15.213293 | orchestrator | }
2026-04-09 07:22:15.213305 | orchestrator |
2026-04-09 07:22:15.213316 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 07:22:15.213327 | orchestrator | Thursday 09 April 2026 07:22:14 +0000 (0:00:00.439) 0:00:35.763 ********
2026-04-09 07:22:15.213358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:22:15.213374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 07:22:15.213386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 07:22:15.213411 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:22:15.213423 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:22:15.213435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:22:15.213448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 07:22:15.213469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 07:23:27.742128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:23:27.742276 | orchestrator | skipping: [testbed-node-1]
2026-04-09 07:23:27.742294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:23:27.742345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 07:23:27.742357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 07:23:27.742367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 07:23:27.742378 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:23:27.742388 | orchestrator |
2026-04-09 07:23:27.742399 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-04-09 07:23:27.742411 | orchestrator | Thursday 09 April 2026 07:22:15 +0000 (0:00:01.164) 0:00:36.927 ********
2026-04-09 07:23:27.742421 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:23:27.742431 | orchestrator |
2026-04-09 07:23:27.742441 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-09 07:23:27.742451 | orchestrator | Thursday 09 April 2026 07:22:32 +0000 (0:00:16.540) 0:00:53.468 ********
2026-04-09 07:23:27.742461 | orchestrator |
2026-04-09 07:23:27.742471 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-09 07:23:27.742481 | orchestrator | Thursday 09 April 2026 07:22:32 +0000 (0:00:00.088) 0:00:53.557 ********
2026-04-09 07:23:27.742490 | orchestrator |
2026-04-09 07:23:27.742515 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-09 07:23:27.742526 | orchestrator | Thursday 09 April 2026 07:22:32 +0000 (0:00:00.074) 0:00:53.631 ********
2026-04-09 07:23:27.742536 | orchestrator |
2026-04-09 07:23:27.742546 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-04-09 07:23:27.742556 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-09 07:23:27.742566 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-09 07:23:27.742594 | orchestrator | Thursday 09 April 2026 07:22:32 +0000 (0:00:00.256) 0:00:53.888 ********
2026-04-09 07:23:27.742603 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:23:27.742613 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:23:27.742625 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:23:27.742638 | orchestrator |
2026-04-09 07:23:27.742650 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-04-09 07:23:27.742661 | orchestrator | Thursday 09 April 2026 07:22:45 +0000 (0:00:12.107) 0:01:05.996 ********
2026-04-09 07:23:27.742673 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:23:27.742685 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:23:27.742696 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:23:27.742708 | orchestrator |
2026-04-09 07:23:27.742720 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-04-09 07:23:27.742731 | orchestrator | Thursday 09 April 2026 07:22:56 +0000 (0:00:11.875) 0:01:17.871 ********
2026-04-09 07:23:27.742743 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:23:27.742754 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:23:27.742765 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:23:27.742776 | orchestrator |
2026-04-09 07:23:27.742788 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-04-09 07:23:27.742800 | orchestrator | Thursday 09 April 2026 07:23:08 +0000 (0:00:11.989) 0:01:29.861 ********
2026-04-09 07:23:27.742812 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:23:27.742823 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:23:27.742835 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:23:27.742845 | orchestrator |
2026-04-09 07:23:27.742857 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 07:23:27.742869 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 07:23:27.742886 | orchestrator | testbed-node-1 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 07:23:27.742898 | orchestrator | testbed-node-2 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 07:23:27.742910 | orchestrator |
2026-04-09 07:23:27.742922 | orchestrator |
2026-04-09 07:23:27.742933 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 07:23:27.742944 | orchestrator | Thursday 09 April 2026 07:23:27 +0000 (0:00:18.518) 0:01:48.379 ********
2026-04-09 07:23:27.742956 | orchestrator | ===============================================================================
2026-04-09 07:23:27.742968 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 18.52s
2026-04-09 07:23:27.742978 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 16.54s
2026-04-09 07:23:27.742988 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 12.11s
2026-04-09 07:23:27.742998 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 11.99s
2026-04-09 07:23:27.743007 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 11.88s
2026-04-09 07:23:27.743017 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.69s
2026-04-09 07:23:27.743027 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.87s
2026-04-09 07:23:27.743037 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.14s
2026-04-09 07:23:27.743046 | orchestrator | service-check-containers : aodh | Check containers ---------------------- 4.05s
2026-04-09 07:23:27.743056 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.65s
2026-04-09 07:23:27.743066 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.97s
2026-04-09 07:23:27.743085 | orchestrator | aodh : include_tasks ---------------------------------------------------- 1.42s
2026-04-09 07:23:27.743096 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS certificate --- 1.18s
2026-04-09 07:23:27.743105 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.16s
2026-04-09 07:23:27.743115 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.16s
2026-04-09 07:23:27.743125 | orchestrator | aodh : Copying over existing policy file -------------------------------- 1.00s
2026-04-09 07:23:27.743134 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s
2026-04-09 07:23:27.743144 | orchestrator | aodh : include_tasks ---------------------------------------------------- 0.94s
2026-04-09 07:23:27.743153 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s
2026-04-09 07:23:27.743182 | orchestrator | service-check-containers : aodh | Notify handlers to restart containers --- 0.44s
2026-04-09 07:23:27.929440 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-09 07:23:27.980439 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 07:23:27.980528 | orchestrator | + osism apply -a bootstrap octavia
2026-04-09 07:23:29.357224 | orchestrator | 2026-04-09 07:23:29 | INFO  | Prepare task for execution of octavia.
2026-04-09 07:23:29.424754 | orchestrator | 2026-04-09 07:23:29 | INFO  | Task f2bb2b3d-8f98-43d6-b8c1-b10cf7dbd6d9 (octavia) was prepared for execution.
2026-04-09 07:23:29.424876 | orchestrator | 2026-04-09 07:23:29 | INFO  | It takes a moment until task f2bb2b3d-8f98-43d6-b8c1-b10cf7dbd6d9 (octavia) has been started and output is visible here.
2026-04-09 07:24:09.507694 | orchestrator |
2026-04-09 07:24:09.507816 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 07:24:09.507835 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-09 07:24:09.507849 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-09 07:24:09.507873 | orchestrator |
2026-04-09 07:24:09.507885 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 07:24:09.507896 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-09 07:24:09.507907 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-09 07:24:09.507930 | orchestrator | Thursday 09 April 2026 07:23:33 +0000 (0:00:01.082) 0:00:01.082 ********
2026-04-09 07:24:09.507941 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:09.507954 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:24:09.507965 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:24:09.507977 | orchestrator |
2026-04-09 07:24:09.507988 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 07:24:09.507999 | orchestrator | Thursday 09 April 2026 07:23:34 +0000 (0:00:00.804) 0:00:01.886 ********
2026-04-09 07:24:09.508011 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-09 07:24:09.508022 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-09 07:24:09.508033 | orchestrator | ok:
[testbed-node-2] => (item=enable_octavia_True)
2026-04-09 07:24:09.508044 | orchestrator |
2026-04-09 07:24:09.508056 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-09 07:24:09.508067 | orchestrator |
2026-04-09 07:24:09.508079 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 07:24:09.508090 | orchestrator | Thursday 09 April 2026 07:23:35 +0000 (0:00:00.735) 0:00:02.622 ********
2026-04-09 07:24:09.508101 | orchestrator | included: /ansible/roles/octavia/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 07:24:09.508113 | orchestrator |
2026-04-09 07:24:09.508142 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-04-09 07:24:09.508154 | orchestrator | Thursday 09 April 2026 07:23:36 +0000 (0:00:01.028) 0:00:03.650 ********
2026-04-09 07:24:09.508221 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:09.508236 | orchestrator |
2026-04-09 07:24:09.508249 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-04-09 07:24:09.508263 | orchestrator | Thursday 09 April 2026 07:23:38 +0000 (0:00:02.565) 0:00:06.215 ********
2026-04-09 07:24:09.508276 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:09.508289 | orchestrator |
2026-04-09 07:24:09.508302 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-04-09 07:24:09.508315 | orchestrator | Thursday 09 April 2026 07:23:41 +0000 (0:00:02.194) 0:00:08.410 ********
2026-04-09 07:24:09.508329 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:09.508342 | orchestrator |
2026-04-09 07:24:09.508355 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-09 07:24:09.508368 | orchestrator | Thursday 09 April 2026 07:23:43 +0000 (0:00:02.106) 0:00:10.517 ********
2026-04-09 07:24:09.508381 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:09.508394 | orchestrator |
2026-04-09 07:24:09.508408 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-09 07:24:09.508421 | orchestrator | Thursday 09 April 2026 07:23:45 +0000 (0:00:02.671) 0:00:13.188 ********
2026-04-09 07:24:09.508434 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:24:09.508447 | orchestrator |
2026-04-09 07:24:09.508460 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 07:24:09.508474 | orchestrator | testbed-node-0 : ok=8  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 07:24:09.508489 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 07:24:09.508504 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 07:24:09.508517 | orchestrator |
2026-04-09 07:24:09.508530 | orchestrator |
2026-04-09 07:24:09.508544 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 07:24:09.508557 | orchestrator | Thursday 09 April 2026 07:24:09 +0000 (0:00:23.133) 0:00:36.322 ********
2026-04-09 07:24:09.508571 | orchestrator | ===============================================================================
2026-04-09 07:24:09.508582 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.13s
2026-04-09 07:24:09.508593 | orchestrator | octavia : Creating Octavia persistence database user and setting permissions --- 2.67s
2026-04-09 07:24:09.508604 | orchestrator | octavia : Creating Octavia database ------------------------------------- 2.57s
2026-04-09 07:24:09.508616 | orchestrator | octavia : Creating Octavia persistence database ------------------------- 2.19s
2026-04-09 07:24:09.508627 | orchestrator | octavia : Creating Octavia database user and setting permissions -------- 2.11s
2026-04-09 07:24:09.508638 | orchestrator | octavia : include_tasks ------------------------------------------------- 1.03s
2026-04-09 07:24:09.508649 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s
2026-04-09 07:24:09.508660 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2026-04-09 07:24:09.679511 | orchestrator | + osism apply -a upgrade octavia
2026-04-09 07:24:10.977842 | orchestrator | 2026-04-09 07:24:10 | INFO  | Prepare task for execution of octavia.
2026-04-09 07:24:11.046918 | orchestrator | 2026-04-09 07:24:11 | INFO  | Task 682a5f8f-fe9e-40d7-b5e0-e00340d74176 (octavia) was prepared for execution.
2026-04-09 07:24:11.047031 | orchestrator | 2026-04-09 07:24:11 | INFO  | It takes a moment until task 682a5f8f-fe9e-40d7-b5e0-e00340d74176 (octavia) has been started and output is visible here.
2026-04-09 07:24:51.128030 | orchestrator |
2026-04-09 07:24:51.128152 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 07:24:51.128221 | orchestrator |
2026-04-09 07:24:51.128233 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 07:24:51.128270 | orchestrator | Thursday 09 April 2026 07:24:15 +0000 (0:00:01.606) 0:00:01.606 ********
2026-04-09 07:24:51.128282 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:51.128294 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:24:51.128305 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:24:51.128316 | orchestrator |
2026-04-09 07:24:51.128328 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 07:24:51.128339 | orchestrator | Thursday 09 April 2026 07:24:17 +0000 (0:00:01.831) 0:00:03.437 ********
2026-04-09 07:24:51.128350 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-09 07:24:51.128362 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-09 07:24:51.128373 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-09 07:24:51.128384 | orchestrator |
2026-04-09 07:24:51.128395 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-09 07:24:51.128406 | orchestrator |
2026-04-09 07:24:51.128418 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 07:24:51.128429 | orchestrator | Thursday 09 April 2026 07:24:20 +0000 (0:00:02.328) 0:00:05.766 ********
2026-04-09 07:24:51.128441 | orchestrator | included: /ansible/roles/octavia/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 07:24:51.128453 | orchestrator |
2026-04-09 07:24:51.128464 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 07:24:51.128475 | orchestrator | Thursday 09 April 2026 07:24:23 +0000 (0:00:03.281) 0:00:09.047 ********
2026-04-09 07:24:51.128502 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 07:24:51.128514 | orchestrator |
2026-04-09 07:24:51.128525 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-09 07:24:51.128536 | orchestrator | Thursday 09 April 2026 07:24:25 +0000 (0:00:01.928) 0:00:10.976 ********
2026-04-09 07:24:51.128547 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:51.128558 | orchestrator |
2026-04-09 07:24:51.128570 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-09 07:24:51.128583 | orchestrator | Thursday 09 April 2026 07:24:30 +0000 (0:00:05.378) 0:00:16.354 ********
2026-04-09 07:24:51.128596 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:51.128609 | orchestrator |
2026-04-09 07:24:51.128623 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-09 07:24:51.128635 | orchestrator | Thursday 09 April 2026 07:24:35 +0000 (0:00:04.324) 0:00:20.679 ********
2026-04-09 07:24:51.128648 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-09 07:24:51.128662 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-09 07:24:51.128675 | orchestrator |
2026-04-09 07:24:51.128689 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-09 07:24:51.128702 | orchestrator | Thursday 09 April 2026 07:24:43 +0000 (0:00:08.137) 0:00:28.816 ********
2026-04-09 07:24:51.128714 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:51.128727 | orchestrator |
2026-04-09 07:24:51.128738 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-04-09 07:24:51.128749 | orchestrator | Thursday 09 April 2026 07:24:47 +0000 (0:00:04.551) 0:00:33.368 ********
2026-04-09 07:24:51.128760 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:24:51.128771 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:24:51.128782 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:24:51.128793 | orchestrator |
2026-04-09 07:24:51.128805 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-04-09 07:24:51.128816 | orchestrator | Thursday 09 April 2026 07:24:49 +0000 (0:00:01.508) 0:00:34.877 ********
2026-04-09 07:24:51.128831 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:24:51.128873 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:24:51.128888 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:24:51.128905 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:24:51.128918 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:24:51.128931 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:24:51.128950 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:24:51.128971 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:24:55.847263 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:24:55.847392 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:24:55.847413 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:24:55.847426 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:24:55.847460 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:24:55.847472 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:24:55.847503 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:24:55.847516 | orchestrator | 2026-04-09 07:24:55.847530 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-09 07:24:55.847543 | orchestrator | Thursday 09 April 2026 07:24:52 +0000 (0:00:03.770) 0:00:38.648 ******** 2026-04-09 07:24:55.847554 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:24:55.847566 | orchestrator | 2026-04-09 07:24:55.847578 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-09 07:24:55.847589 | orchestrator | Thursday 09 April 2026 07:24:54 +0000 (0:00:01.102) 0:00:39.750 ******** 2026-04-09 07:24:55.847600 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:24:55.847611 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:24:55.847622 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:24:55.847634 | orchestrator | 2026-04-09 07:24:55.847645 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-09 07:24:55.847656 | orchestrator | Thursday 09 April 2026 07:24:55 +0000 (0:00:01.354) 0:00:41.105 ******** 2026-04-09 07:24:55.847674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:24:55.847697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:24:55.847711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:24:55.847723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:24:55.847743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:25:00.462715 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:25:00.462845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:25:00.462871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:25:00.462907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:25:00.462921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:25:00.462933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:25:00.462945 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:25:00.462977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:25:00.462997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:25:00.463009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:25:00.463029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:25:00.463040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:25:00.463052 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:25:00.463063 | orchestrator | 2026-04-09 07:25:00.463075 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-09 07:25:00.463088 | orchestrator | Thursday 09 April 2026 07:24:57 +0000 (0:00:01.716) 0:00:42.821 ******** 2026-04-09 07:25:00.463100 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:25:00.463112 | orchestrator | 2026-04-09 07:25:00.463123 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-09 07:25:00.463134 | orchestrator | Thursday 09 April 2026 07:24:58 +0000 (0:00:01.679) 0:00:44.501 ******** 2026-04-09 07:25:00.463154 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:03.756578 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:03.756691 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:03.756704 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:03.756714 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:03.756723 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:03.756747 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:03.756763 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:03.756779 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:03.756787 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:03.756796 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:03.756804 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:03.756813 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:25:03.756832 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:25:05.562946 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:25:05.563030 | orchestrator | 2026-04-09 07:25:05.563040 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-09 07:25:05.563048 | orchestrator | Thursday 09 April 2026 07:25:04 +0000 (0:00:06.101) 0:00:50.603 ******** 2026-04-09 07:25:05.563058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:25:05.563069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:25:05.563078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:25:05.563086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:25:05.563141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:25:05.563149 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:25:05.563257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:25:05.563268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:25:05.563276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:25:05.563283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:25:05.563291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:25:05.563304 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:25:05.563324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:25:07.116469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:25:07.116564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:25:07.116580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:25:07.116593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:25:07.116628 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:25:07.116642 | orchestrator | 
2026-04-09 07:25:07.116655 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-09 07:25:07.116667 | orchestrator | Thursday 09 April 2026 07:25:06 +0000 (0:00:01.687) 0:00:52.290 ******** 2026-04-09 07:25:07.116691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:25:07.116723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:25:07.116737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:25:07.116749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:25:07.116760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:25:07.116772 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:25:07.116784 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:25:07.116807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:25:07.116827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:25:10.689201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:25:10.689302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:25:10.689319 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:25:10.689335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:25:10.689370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:25:10.689397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:25:10.689427 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:25:10.689440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:25:10.689451 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:25:10.689463 | orchestrator | 2026-04-09 07:25:10.689475 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-09 07:25:10.689487 | orchestrator | Thursday 09 April 2026 07:25:08 +0000 (0:00:01.645) 0:00:53.936 ******** 2026-04-09 07:25:10.689499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:10.689519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:10.689536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:10.689558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:20.760501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:20.760612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:20.760631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:20.760669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:20.760698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:20.760711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:20.760741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:20.760781 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:20.760794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:25:20.760814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:25:20.760826 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:25:20.760838 | orchestrator | 2026-04-09 07:25:20.760851 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-09 07:25:20.760864 | orchestrator | Thursday 09 April 2026 07:25:14 +0000 (0:00:06.210) 0:01:00.147 ******** 2026-04-09 07:25:20.760880 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-09 07:25:20.760892 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-09 07:25:20.760904 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-09 07:25:20.760915 | orchestrator | 2026-04-09 07:25:20.760926 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-09 07:25:20.760937 | orchestrator | Thursday 09 April 2026 07:25:17 +0000 (0:00:02.593) 0:01:02.740 ******** 2026-04-09 07:25:20.760956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:34.591864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:34.592027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:34.592052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:34.592086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:34.592102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:34.592138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:34.592222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:34.592240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:34.592257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:34.592280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:34.592296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:25:34.592312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:25:34.592337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:25:59.418879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:25:59.418999 | orchestrator | 2026-04-09 07:25:59.419017 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-09 07:25:59.419030 | orchestrator | Thursday 09 April 2026 07:25:35 +0000 (0:00:18.698) 0:01:21.439 ******** 2026-04-09 07:25:59.419042 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:25:59.419054 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:25:59.419065 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:25:59.419076 | orchestrator | 2026-04-09 07:25:59.419087 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-09 07:25:59.419098 | orchestrator | Thursday 09 April 2026 07:25:38 +0000 (0:00:02.734) 0:01:24.173 ******** 2026-04-09 07:25:59.419110 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 07:25:59.419121 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 07:25:59.419132 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 07:25:59.419143 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 07:25:59.419155 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 07:25:59.419244 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 07:25:59.419256 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 07:25:59.419267 | orchestrator 
| ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 07:25:59.419278 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 07:25:59.419289 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 07:25:59.419301 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 07:25:59.419312 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 07:25:59.419323 | orchestrator | 2026-04-09 07:25:59.419334 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-09 07:25:59.419345 | orchestrator | Thursday 09 April 2026 07:25:44 +0000 (0:00:05.864) 0:01:30.038 ******** 2026-04-09 07:25:59.419356 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 07:25:59.419367 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 07:25:59.419378 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 07:25:59.419407 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 07:25:59.419421 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 07:25:59.419435 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 07:25:59.419447 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 07:25:59.419460 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 07:25:59.419473 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 07:25:59.419507 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 07:25:59.419521 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 07:25:59.419534 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 07:25:59.419545 | orchestrator | 2026-04-09 07:25:59.419557 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-health-manager] ********** 2026-04-09 07:25:59.419568 | orchestrator | Thursday 09 April 2026 07:25:50 +0000 (0:00:06.005) 0:01:36.043 ******** 2026-04-09 07:25:59.419579 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 07:25:59.419590 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 07:25:59.419600 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 07:25:59.419611 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 07:25:59.419622 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 07:25:59.419633 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 07:25:59.419644 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 07:25:59.419654 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 07:25:59.419665 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 07:25:59.419676 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 07:25:59.419687 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 07:25:59.419698 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 07:25:59.419709 | orchestrator | 2026-04-09 07:25:59.419720 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-04-09 07:25:59.419731 | orchestrator | Thursday 09 April 2026 07:25:56 +0000 (0:00:06.560) 0:01:42.604 ******** 2026-04-09 07:25:59.419760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:59.419777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:59.419795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 07:25:59.419817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:59.419831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:25:59.419849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 07:26:05.110948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:26:05.111067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:26:05.111084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 07:26:05.111138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:26:05.111153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:26:05.111213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 07:26:05.111245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:26:05.111259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:26:05.111271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 07:26:05.111293 | orchestrator | 2026-04-09 07:26:05.111306 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-04-09 07:26:05.111319 | orchestrator | Thursday 09 April 2026 07:26:03 +0000 (0:00:06.295) 0:01:48.899 ******** 2026-04-09 07:26:05.111332 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 07:26:05.111344 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:26:05.111356 | orchestrator | } 2026-04-09 07:26:05.111367 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 07:26:05.111379 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:26:05.111395 | orchestrator | } 2026-04-09 07:26:05.111407 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 07:26:05.111418 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:26:05.111429 | orchestrator | } 2026-04-09 07:26:05.111441 | orchestrator | 2026-04-09 07:26:05.111453 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 07:26:05.111464 | orchestrator | Thursday 09 April 2026 07:26:04 +0000 (0:00:01.440) 0:01:50.340 ******** 2026-04-09 07:26:05.111477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:26:05.111493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:26:05.111514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-04-09 07:26:05.306197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:26:05.306321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:26:05.306337 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:26:05.306368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:26:05.306385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:26:05.306398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:26:05.306426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:26:05.306438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:26:05.306459 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:26:05.306476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 07:26:05.306489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 07:26:05.306500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 07:26:05.306512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 07:26:05.306530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 07:27:36.081514 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:27:36.081634 | orchestrator | 2026-04-09 07:27:36.081653 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-09 07:27:36.081668 | orchestrator | Thursday 09 April 2026 07:26:06 +0000 (0:00:02.235) 0:01:52.576 ******** 2026-04-09 07:27:36.081680 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:27:36.081691 | orchestrator | 2026-04-09 07:27:36.081703 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 07:27:36.081714 | orchestrator | Thursday 09 April 2026 07:26:20 +0000 (0:00:13.842) 0:02:06.419 ******** 2026-04-09 07:27:36.081725 | orchestrator | 2026-04-09 07:27:36.081736 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 07:27:36.081747 | orchestrator | Thursday 09 April 2026 07:26:21 +0000 (0:00:00.422) 0:02:06.841 ******** 2026-04-09 07:27:36.081758 | orchestrator | 2026-04-09 07:27:36.081768 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 
2026-04-09 07:27:36.081779 | orchestrator | Thursday 09 April 2026 07:26:21 +0000 (0:00:00.492) 0:02:07.333 ******** 2026-04-09 07:27:36.081790 | orchestrator | 2026-04-09 07:27:36.081801 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-09 07:27:36.081812 | orchestrator | Thursday 09 April 2026 07:26:22 +0000 (0:00:00.810) 0:02:08.144 ******** 2026-04-09 07:27:36.081823 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:27:36.081834 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:27:36.081845 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:27:36.081856 | orchestrator | 2026-04-09 07:27:36.081867 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-09 07:27:36.081878 | orchestrator | Thursday 09 April 2026 07:26:41 +0000 (0:00:18.900) 0:02:27.045 ******** 2026-04-09 07:27:36.081889 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:27:36.081900 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:27:36.081911 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:27:36.081922 | orchestrator | 2026-04-09 07:27:36.081933 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-09 07:27:36.081944 | orchestrator | Thursday 09 April 2026 07:26:55 +0000 (0:00:14.567) 0:02:41.612 ******** 2026-04-09 07:27:36.081955 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:27:36.081966 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:27:36.081977 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:27:36.081988 | orchestrator | 2026-04-09 07:27:36.082078 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-09 07:27:36.082094 | orchestrator | Thursday 09 April 2026 07:27:09 +0000 (0:00:13.310) 0:02:54.923 ******** 2026-04-09 07:27:36.082107 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:27:36.082121 | 
orchestrator | changed: [testbed-node-0] 2026-04-09 07:27:36.082135 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:27:36.082148 | orchestrator | 2026-04-09 07:27:36.082161 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-09 07:27:36.082208 | orchestrator | Thursday 09 April 2026 07:27:22 +0000 (0:00:12.914) 0:03:07.837 ******** 2026-04-09 07:27:36.082228 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:27:36.082241 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:27:36.082254 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:27:36.082266 | orchestrator | 2026-04-09 07:27:36.082279 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:27:36.082294 | orchestrator | testbed-node-0 : ok=27  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 07:27:36.082309 | orchestrator | testbed-node-1 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 07:27:36.082323 | orchestrator | testbed-node-2 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 07:27:36.082361 | orchestrator | 2026-04-09 07:27:36.082374 | orchestrator | 2026-04-09 07:27:36.082387 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:27:36.082400 | orchestrator | Thursday 09 April 2026 07:27:35 +0000 (0:00:13.421) 0:03:21.258 ******** 2026-04-09 07:27:36.082411 | orchestrator | =============================================================================== 2026-04-09 07:27:36.082422 | orchestrator | octavia : Restart octavia-api container -------------------------------- 18.90s 2026-04-09 07:27:36.082432 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.70s 2026-04-09 07:27:36.082443 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 
14.57s 2026-04-09 07:27:36.082454 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 13.84s 2026-04-09 07:27:36.082464 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 13.42s 2026-04-09 07:27:36.082475 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 13.31s 2026-04-09 07:27:36.082486 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 12.91s 2026-04-09 07:27:36.082496 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.14s 2026-04-09 07:27:36.082507 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.56s 2026-04-09 07:27:36.082518 | orchestrator | service-check-containers : octavia | Check containers ------------------- 6.29s 2026-04-09 07:27:36.082528 | orchestrator | octavia : Copying over config.json files for services ------------------- 6.21s 2026-04-09 07:27:36.082539 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 6.10s 2026-04-09 07:27:36.082549 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.01s 2026-04-09 07:27:36.082560 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.86s 2026-04-09 07:27:36.082589 | orchestrator | octavia : Get amphora flavor info --------------------------------------- 5.38s 2026-04-09 07:27:36.082600 | orchestrator | octavia : Get loadbalancer management network --------------------------- 4.55s 2026-04-09 07:27:36.082611 | orchestrator | octavia : Get service project id ---------------------------------------- 4.32s 2026-04-09 07:27:36.082622 | orchestrator | octavia : Ensuring config directories exist ----------------------------- 3.77s 2026-04-09 07:27:36.082632 | orchestrator | octavia : include_tasks ------------------------------------------------- 3.28s 
2026-04-09 07:27:36.082643 | orchestrator | octavia : Copying over Octavia SSH key ---------------------------------- 2.73s 2026-04-09 07:27:36.271942 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-09 07:27:36.272036 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/310-openstack-extended.sh 2026-04-09 07:27:37.564974 | orchestrator | 2026-04-09 07:27:37 | INFO  | Prepare task for execution of gnocchi. 2026-04-09 07:27:37.632945 | orchestrator | 2026-04-09 07:27:37 | INFO  | Task e8adc82d-5429-4e07-8fd9-35e7aa0b9183 (gnocchi) was prepared for execution. 2026-04-09 07:27:37.633040 | orchestrator | 2026-04-09 07:27:37 | INFO  | It takes a moment until task e8adc82d-5429-4e07-8fd9-35e7aa0b9183 (gnocchi) has been started and output is visible here. 2026-04-09 07:27:49.679342 | orchestrator | 2026-04-09 07:27:49.679450 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 07:27:49.679467 | orchestrator | 2026-04-09 07:27:49.679479 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 07:27:49.679489 | orchestrator | Thursday 09 April 2026 07:27:42 +0000 (0:00:01.630) 0:00:01.630 ******** 2026-04-09 07:27:49.679499 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:27:49.679509 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:27:49.679520 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:27:49.679530 | orchestrator | 2026-04-09 07:27:49.679540 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 07:27:49.679550 | orchestrator | Thursday 09 April 2026 07:27:44 +0000 (0:00:01.786) 0:00:03.417 ******** 2026-04-09 07:27:49.679584 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-04-09 07:27:49.679595 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-04-09 07:27:49.679618 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 
2026-04-09 07:27:49.679629 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-04-09 07:27:49.679639 | orchestrator | 2026-04-09 07:27:49.679649 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-04-09 07:27:49.679659 | orchestrator | skipping: no hosts matched 2026-04-09 07:27:49.679670 | orchestrator | 2026-04-09 07:27:49.679680 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:27:49.679690 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 07:27:49.679702 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 07:27:49.679711 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 07:27:49.679721 | orchestrator | 2026-04-09 07:27:49.679731 | orchestrator | 2026-04-09 07:27:49.679741 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:27:49.679751 | orchestrator | Thursday 09 April 2026 07:27:49 +0000 (0:00:04.978) 0:00:08.395 ******** 2026-04-09 07:27:49.679760 | orchestrator | =============================================================================== 2026-04-09 07:27:49.679770 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.98s 2026-04-09 07:27:49.679782 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.79s 2026-04-09 07:27:51.243774 | orchestrator | 2026-04-09 07:27:51 | INFO  | Prepare task for execution of manila. 2026-04-09 07:27:51.312046 | orchestrator | 2026-04-09 07:27:51 | INFO  | Task f4a122e1-15e4-4eb5-8df2-615fb2d25908 (manila) was prepared for execution. 
2026-04-09 07:27:51.312144 | orchestrator | 2026-04-09 07:27:51 | INFO  | It takes a moment until task f4a122e1-15e4-4eb5-8df2-615fb2d25908 (manila) has been started and output is visible here. 2026-04-09 07:28:05.378066 | orchestrator | 2026-04-09 07:28:05.378161 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 07:28:05.378214 | orchestrator | 2026-04-09 07:28:05.378222 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 07:28:05.378229 | orchestrator | Thursday 09 April 2026 07:27:56 +0000 (0:00:01.895) 0:00:01.895 ******** 2026-04-09 07:28:05.378236 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:28:05.378245 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:28:05.378252 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:28:05.378258 | orchestrator | 2026-04-09 07:28:05.378265 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 07:28:05.378272 | orchestrator | Thursday 09 April 2026 07:27:58 +0000 (0:00:01.764) 0:00:03.660 ******** 2026-04-09 07:28:05.378278 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-04-09 07:28:05.378285 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-04-09 07:28:05.378292 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-04-09 07:28:05.378299 | orchestrator | 2026-04-09 07:28:05.378305 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-04-09 07:28:05.378311 | orchestrator | 2026-04-09 07:28:05.378318 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-09 07:28:05.378324 | orchestrator | Thursday 09 April 2026 07:28:00 +0000 (0:00:02.103) 0:00:05.763 ******** 2026-04-09 07:28:05.378331 | orchestrator | included: /ansible/roles/manila/tasks/upgrade.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-04-09 07:28:05.378339 | orchestrator | 2026-04-09 07:28:05.378345 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-04-09 07:28:05.378372 | orchestrator | Thursday 09 April 2026 07:28:02 +0000 (0:00:02.509) 0:00:08.273 ******** 2026-04-09 07:28:05.378382 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:05.378404 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:05.378412 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:05.378433 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:05.378440 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:05.378451 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:05.378460 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 
07:28:05.378468 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:05.378474 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:05.378487 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:23.311508 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:23.311626 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:23.311639 | orchestrator | 2026-04-09 07:28:23.311649 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-09 07:28:23.311659 | orchestrator | Thursday 09 April 2026 07:28:06 +0000 (0:00:03.834) 0:00:12.108 ******** 2026-04-09 07:28:23.311668 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:28:23.311677 | orchestrator | 2026-04-09 07:28:23.311685 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] 
************** 2026-04-09 07:28:23.311693 | orchestrator | Thursday 09 April 2026 07:28:08 +0000 (0:00:01.853) 0:00:13.961 ******** 2026-04-09 07:28:23.311701 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:28:23.311710 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:28:23.311718 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:28:23.311726 | orchestrator | 2026-04-09 07:28:23.311735 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-04-09 07:28:23.311743 | orchestrator | Thursday 09 April 2026 07:28:10 +0000 (0:00:02.110) 0:00:16.071 ******** 2026-04-09 07:28:23.311765 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 07:28:23.311775 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 07:28:23.311784 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 07:28:23.311792 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 07:28:23.311800 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 07:28:23.311808 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 07:28:23.311816 | orchestrator | 2026-04-09 07:28:23.311825 | 
orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-04-09 07:28:23.311833 | orchestrator | Thursday 09 April 2026 07:28:13 +0000 (0:00:02.468) 0:00:18.539 ******** 2026-04-09 07:28:23.311842 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 07:28:23.311850 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 07:28:23.311859 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 07:28:23.311873 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 07:28:23.311895 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-09 07:28:23.311903 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-09 07:28:23.311912 | orchestrator | 2026-04-09 07:28:23.311920 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-04-09 07:28:23.311928 | orchestrator | Thursday 09 April 2026 07:28:15 +0000 (0:00:02.297) 0:00:20.837 ******** 2026-04-09 07:28:23.311937 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-04-09 07:28:23.311945 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-04-09 
07:28:23.311953 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-04-09 07:28:23.311961 | orchestrator | 2026-04-09 07:28:23.311969 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-04-09 07:28:23.311977 | orchestrator | Thursday 09 April 2026 07:28:17 +0000 (0:00:01.920) 0:00:22.758 ******** 2026-04-09 07:28:23.311986 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:28:23.311995 | orchestrator | 2026-04-09 07:28:23.312003 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-04-09 07:28:23.312011 | orchestrator | Thursday 09 April 2026 07:28:18 +0000 (0:00:01.143) 0:00:23.901 ******** 2026-04-09 07:28:23.312019 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:28:23.312027 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:28:23.312035 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:28:23.312043 | orchestrator | 2026-04-09 07:28:23.312052 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-09 07:28:23.312060 | orchestrator | Thursday 09 April 2026 07:28:19 +0000 (0:00:01.351) 0:00:25.253 ******** 2026-04-09 07:28:23.312070 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:28:23.312080 | orchestrator | 2026-04-09 07:28:23.312090 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-04-09 07:28:23.312099 | orchestrator | Thursday 09 April 2026 07:28:21 +0000 (0:00:01.953) 0:00:27.206 ******** 2026-04-09 07:28:23.312115 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:23.312128 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:23.312151 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:27.434342 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:27.434457 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 
07:28:27.434490 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:27.434504 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:27.434537 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:27.434549 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:27.434580 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:27.434593 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:27.434605 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:27.434617 | orchestrator | 2026-04-09 07:28:27.434635 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-09 07:28:27.434653 | orchestrator | Thursday 09 April 2026 07:28:26 +0000 (0:00:04.932) 0:00:32.138 ******** 2026-04-09 07:28:27.434667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:27.434688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:27.434709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:29.554741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:29.554877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:29.554896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:29.554929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:29.554941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:29.554953 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:28:29.554982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:29.554994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:29.555010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:29.555028 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:28:29.555039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:29.555049 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:28:29.555059 | orchestrator | 2026-04-09 07:28:29.555070 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-09 07:28:29.555082 | orchestrator | Thursday 09 April 2026 07:28:28 +0000 (0:00:02.130) 0:00:34.269 ******** 2026-04-09 07:28:29.555093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:29.555103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:29.555122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:33.037624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:33.037792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:33.037823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:33.037844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:33.037858 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:28:33.037872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:33.037908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  
2026-04-09 07:28:33.037936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:33.037970 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:28:33.037982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:33.037994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:33.038006 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:28:33.038075 | orchestrator | 2026-04-09 07:28:33.038089 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-09 07:28:33.038102 | orchestrator | Thursday 09 April 2026 07:28:31 +0000 (0:00:02.563) 0:00:36.833 ******** 2026-04-09 07:28:33.038115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:33.038164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:39.406309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:39.406428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:39.406445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:39.406458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:39.406471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:39.406502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:39.406541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:39.406554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:39.406567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:39.406578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:39.406590 | orchestrator | 2026-04-09 07:28:39.406603 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-09 07:28:39.406616 | orchestrator | Thursday 09 April 2026 07:28:36 +0000 (0:00:05.366) 0:00:42.200 ******** 2026-04-09 07:28:39.406629 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:39.406662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:49.817249 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:49.817386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:49.817415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:49.817435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:49.817479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:49.817522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:49.817534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:49.817545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:49.817556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:49.817567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:49.817584 | orchestrator | 2026-04-09 07:28:49.817596 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-04-09 07:28:49.817608 | orchestrator | Thursday 09 April 2026 07:28:44 +0000 (0:00:07.611) 0:00:49.811 ******** 2026-04-09 07:28:49.817618 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-09 07:28:49.817628 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-09 07:28:49.817638 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-09 07:28:49.817648 | orchestrator | 2026-04-09 07:28:49.817657 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-09 07:28:49.817667 | orchestrator | Thursday 09 April 2026 07:28:49 +0000 (0:00:04.713) 0:00:54.524 ******** 2026-04-09 07:28:49.817689 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:52.927582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:52.927713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:52.927739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:52.927798 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:28:52.927822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:52.927842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:52.927890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:52.927904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:52.927915 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:28:52.927927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:52.927948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:52.927959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:52.927976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:52.927987 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:28:52.927999 | orchestrator | 2026-04-09 07:28:52.928011 | orchestrator | TASK [service-check-containers : manila | Check containers] ******************** 2026-04-09 07:28:52.928024 | orchestrator | Thursday 09 April 2026 07:28:51 +0000 (0:00:02.316) 0:00:56.841 ******** 2026-04-09 07:28:52.928044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:57.028440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:57.028569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:28:57.028590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:57.028614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-09 07:28:57.028624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:57.028653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:57.028663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:57.028680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:57.028688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:57.028702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:57.028710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-09 07:28:57.028718 | orchestrator | 2026-04-09 07:28:57.028727 | orchestrator | TASK [service-check-containers : manila | Notify handlers to restart containers] *** 2026-04-09 07:28:57.028737 | orchestrator | Thursday 09 April 2026 07:28:56 +0000 (0:00:05.159) 0:01:02.001 ******** 2026-04-09 07:28:57.028746 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 07:28:57.028755 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:28:57.028764 | orchestrator | } 2026-04-09 07:28:57.028772 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 07:28:57.028780 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:28:57.028787 | orchestrator | } 2026-04-09 07:28:57.028795 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 07:28:57.028810 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:28:58.903017 | orchestrator | } 2026-04-09 07:28:58.903215 | orchestrator | 2026-04-09 07:28:58.903247 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 07:28:58.903269 | orchestrator | Thursday 09 April 2026 07:28:58 +0000 (0:00:01.385) 0:01:03.386 ******** 2026-04-09 07:28:58.903290 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:58.903306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:58.903320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:58.903350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:58.903362 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:28:58.903396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:58.903431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:58.903443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:28:58.903456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:28:58.903467 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:28:58.903484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:28:58.903496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 07:28:58.903516 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 07:32:29.673893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 07:32:29.673976 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:32:29.673984 | orchestrator | 2026-04-09 07:32:29.673989 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-04-09 07:32:29.673994 | orchestrator | Thursday 09 April 2026 07:29:00 +0000 (0:00:02.442) 0:01:05.829 ******** 2026-04-09 07:32:29.673998 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:32:29.674002 | orchestrator | 2026-04-09 07:32:29.674006 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-09 07:32:29.674010 | orchestrator | Thursday 09 April 2026 07:29:21 
+0000 (0:00:21.032) 0:01:26.862 ********
2026-04-09 07:32:29.674046 | orchestrator |
2026-04-09 07:32:29.674051 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-09 07:32:29.674055 | orchestrator | Thursday 09 April 2026 07:29:22 +0000 (0:00:00.488) 0:01:27.350 ********
2026-04-09 07:32:29.674059 | orchestrator |
2026-04-09 07:32:29.674062 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-09 07:32:29.674067 | orchestrator | Thursday 09 April 2026 07:29:22 +0000 (0:00:00.453) 0:01:27.804 ********
2026-04-09 07:32:29.674070 | orchestrator |
2026-04-09 07:32:29.674074 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-04-09 07:32:29.674078 | orchestrator | Thursday 09 April 2026 07:29:23 +0000 (0:00:00.834) 0:01:28.638 ********
2026-04-09 07:32:29.674082 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:32:29.674086 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:32:29.674090 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:32:29.674094 | orchestrator |
2026-04-09 07:32:29.674098 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-04-09 07:32:29.674102 | orchestrator | Thursday 09 April 2026 07:29:40 +0000 (0:00:17.594) 0:01:46.233 ********
2026-04-09 07:32:29.674106 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:32:29.674110 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:32:29.674114 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:32:29.674118 | orchestrator |
2026-04-09 07:32:29.674122 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-04-09 07:32:29.674126 | orchestrator | Thursday 09 April 2026 07:29:54 +0000 (0:00:13.470) 0:01:59.703 ********
2026-04-09 07:32:29.674130 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:32:29.674134 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:32:29.674138 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:32:29.674142 | orchestrator |
2026-04-09 07:32:29.674146 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-04-09 07:32:29.674150 | orchestrator | Thursday 09 April 2026 07:30:07 +0000 (0:00:13.423) 0:02:13.127 ********
2026-04-09 07:32:29.674168 | orchestrator |
2026-04-09 07:32:29.674172 | orchestrator | STILL ALIVE [task 'manila : Restart manila-share container' is running] ********
2026-04-09 07:32:29.674176 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:32:29.674180 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:32:29.674184 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:32:29.674188 | orchestrator |
2026-04-09 07:32:29.674192 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 07:32:29.674206 | orchestrator | testbed-node-0 : ok=21  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 07:32:29.674211 | orchestrator | testbed-node-1 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 07:32:29.674215 | orchestrator | testbed-node-2 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 07:32:29.674219 | orchestrator |
2026-04-09 07:32:29.674223 | orchestrator |
2026-04-09 07:32:29.674227 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 07:32:29.674230 | orchestrator | Thursday 09 April 2026 07:32:29 +0000 (0:02:21.344) 0:04:34.472 ********
2026-04-09 07:32:29.674234 | orchestrator | ===============================================================================
2026-04-09 07:32:29.674238 | orchestrator | manila : Restart manila-share container ------------------------------- 141.34s
2026-04-09 07:32:29.674242 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 21.03s
2026-04-09 07:32:29.674246 | orchestrator | manila : Restart manila-api container ---------------------------------- 17.59s
2026-04-09 07:32:29.674249 | orchestrator | manila : Restart manila-data container --------------------------------- 13.47s
2026-04-09 07:32:29.674253 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 13.42s
2026-04-09 07:32:29.674257 | orchestrator | manila : Copying over manila.conf --------------------------------------- 7.61s
2026-04-09 07:32:29.674261 | orchestrator | manila : Copying over config.json files for services -------------------- 5.37s
2026-04-09 07:32:29.674265 | orchestrator | service-check-containers : manila | Check containers -------------------- 5.16s
2026-04-09 07:32:29.674268 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.93s
2026-04-09 07:32:29.674281 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.71s
2026-04-09 07:32:29.674285 | orchestrator | manila : Ensuring config directories exist ------------------------------ 3.83s
2026-04-09 07:32:29.674289 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS key ------ 2.56s
2026-04-09 07:32:29.674293 | orchestrator | manila : include_tasks -------------------------------------------------- 2.51s
2026-04-09 07:32:29.674297 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 2.47s
2026-04-09 07:32:29.674300 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.44s
2026-04-09 07:32:29.674304 | orchestrator | manila : Copying over existing policy file ------------------------------ 2.32s
2026-04-09 07:32:29.674308 | orchestrator | manila : Copy over ceph Manila keyrings --------------------------------- 2.30s
2026-04-09 07:32:29.674312 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS certificate --- 2.13s
2026-04-09 07:32:29.674316 | orchestrator | manila : Ensuring manila service ceph config subdir exists -------------- 2.11s
2026-04-09 07:32:29.674320 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.10s
2026-04-09 07:32:29.857002 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-09 07:32:29.857124 | orchestrator | + osism migrate rabbitmq3to4 delete
2026-04-09 07:32:36.275879 | orchestrator | 2026-04-09 07:32:36 | ERROR  | Unable to get ansible vault password
2026-04-09 07:32:36.275984 | orchestrator | 2026-04-09 07:32:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-09 07:32:36.276000 | orchestrator | 2026-04-09 07:32:36 | ERROR  | Dropping encrypted entries
2026-04-09 07:32:36.310800 | orchestrator | 2026-04-09 07:32:36 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-09 07:32:36.583497 | orchestrator | 2026-04-09 07:32:36 | INFO  | Found 128 classic queue(s) in vhost '/'
2026-04-09 07:32:36.630718 | orchestrator | 2026-04-09 07:32:36 | INFO  | Deleted queue: alarm.all.sample
2026-04-09 07:32:36.690261 | orchestrator | 2026-04-09 07:32:36 | INFO  | Deleted queue: alarming.sample
2026-04-09 07:32:36.737919 | orchestrator | 2026-04-09 07:32:36 | INFO  | Deleted queue: barbican.workers
2026-04-09 07:32:36.796471 | orchestrator | 2026-04-09 07:32:36 | INFO  | Deleted queue: barbican.workers.barbican.queue
2026-04-09 07:32:36.831251 | orchestrator | 2026-04-09 07:32:36 | INFO  | Deleted queue: barbican.workers_fanout_450cce1ac9394a10b3fb873c58725b8a
2026-04-09 07:32:36.870770 | orchestrator | 2026-04-09 07:32:36 | INFO  | Deleted queue: barbican.workers_fanout_9f06d98685624c69b3ac95e6a3ff9bac
2026-04-09 07:32:36.913917 | orchestrator | 2026-04-09 07:32:36 | INFO  | Deleted queue: barbican.workers_fanout_cf4ef050da0a4cd39d96b1ee21f2894c
2026-04-09 07:32:36.962251 | orchestrator | 2026-04-09 07:32:36 | INFO  | Deleted queue: barbican_notifications.info
2026-04-09 07:32:37.014747 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central
2026-04-09 07:32:37.064364 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central.testbed-node-0
2026-04-09 07:32:37.126589 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central.testbed-node-1
2026-04-09 07:32:37.188928 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central.testbed-node-2
2026-04-09 07:32:37.234638 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central_fanout_245d90ee3aac47fb88e60e273cf24ccc
2026-04-09 07:32:37.275308 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central_fanout_51f5f64e6cd247779ea68d562f1c6b4e
2026-04-09 07:32:37.321007 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central_fanout_60497334245b4f479d9bbd451602334b
2026-04-09 07:32:37.361875 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central_fanout_66d2e85d806846d9b9a772534d26a3fa
2026-04-09 07:32:37.401796 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central_fanout_9fa98b2c19f44c4093942735d6476a52
2026-04-09 07:32:37.449321 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: central_fanout_b9a0a81986d04c31b7105e6d6a098f98
2026-04-09 07:32:37.498876 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-backup
2026-04-09 07:32:37.545744 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-backup.testbed-node-0
2026-04-09 07:32:37.587848 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-backup.testbed-node-1
2026-04-09 07:32:37.640262 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-backup.testbed-node-2
2026-04-09 07:32:37.691158 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-scheduler
2026-04-09 07:32:37.775536 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-scheduler.testbed-node-0
2026-04-09 07:32:37.821650 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-scheduler.testbed-node-1
2026-04-09 07:32:37.869013 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-scheduler.testbed-node-2
2026-04-09 07:32:37.911036 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-volume
2026-04-09 07:32:37.965217 | orchestrator | 2026-04-09 07:32:37 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes
2026-04-09 07:32:38.023127 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0
2026-04-09 07:32:38.069080 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes
2026-04-09 07:32:38.109986 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1
2026-04-09 07:32:38.151490 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes
2026-04-09 07:32:38.213056 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2
2026-04-09 07:32:38.260483 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: compute
2026-04-09 07:32:38.313832 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: compute.testbed-node-3
2026-04-09 07:32:38.361021 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: compute.testbed-node-4
2026-04-09 07:32:38.414448 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: compute.testbed-node-5
2026-04-09 07:32:38.458318 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: conductor
2026-04-09 07:32:38.506296 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: conductor.testbed-node-0
2026-04-09 07:32:38.553429 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: conductor.testbed-node-1
2026-04-09 07:32:38.606798 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: conductor.testbed-node-2
2026-04-09 07:32:38.667024 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: event.sample
2026-04-09 07:32:38.695862 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed connection: 192.168.16.10:40800 -> 192.168.16.10:5672
2026-04-09 07:32:38.712824 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed connection: 192.168.16.11:47446 -> 192.168.16.11:5672
2026-04-09 07:32:38.728934 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed connection: 192.168.16.10:40922 -> 192.168.16.10:5672
2026-04-09 07:32:38.746626 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed connection: 192.168.16.12:38676 -> 192.168.16.11:5672
2026-04-09 07:32:38.770609 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed connection: 192.168.16.12:38664 -> 192.168.16.11:5672
2026-04-09 07:32:38.787268 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed connection: 192.168.16.11:60468 -> 192.168.16.10:5672
2026-04-09 07:32:38.802284 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed connection: 192.168.16.11:38670 -> 192.168.16.10:5672
2026-04-09 07:32:38.820240 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed connection: 192.168.16.10:48766 -> 192.168.16.10:5672
2026-04-09 07:32:38.837286 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed connection: 192.168.16.12:42418 -> 192.168.16.10:5672
2026-04-09 07:32:38.837399 | orchestrator | 2026-04-09 07:32:38 | INFO  | Closed 9 connection(s) for queue: magnum-conductor
2026-04-09 07:32:38.866231 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: magnum-conductor
2026-04-09 07:32:38.903599 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: magnum-conductor.cykcdelam52s
2026-04-09 07:32:38.950750 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: magnum-conductor.hbzgkxncdqav
2026-04-09 07:32:38.993115 | orchestrator | 2026-04-09 07:32:38 | INFO  | Deleted queue: magnum-conductor.mxbpe6z4rg4r
2026-04-09 07:32:39.047192 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: magnum-conductor_fanout_0c1eb08ade4b4112811cd116bdd6b4c2
2026-04-09 07:32:39.089491 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: magnum-conductor_fanout_3aca7ac949d248549306a81b8e7fed9b
2026-04-09 07:32:39.129793 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: magnum-conductor_fanout_532b16f5be3c4962b53ddf24cfa6d53a
2026-04-09 07:32:39.165584 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: magnum-conductor_fanout_55d3ec5dcdb14ef0b29dbe13439f2cf7
2026-04-09 07:32:39.200797 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: magnum-conductor_fanout_57ba0816dc4e47c4a46e9bc9d8af3ce6
2026-04-09 07:32:39.229615 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: magnum-conductor_fanout_73cd068eca4a46d68c5e268ecfbceaed
2026-04-09 07:32:39.260929 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: magnum-conductor_fanout_7e57901f145d443bb977ac07fcc7ff75
2026-04-09 07:32:39.295464 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: magnum-conductor_fanout_ad329c41f85d47b481d20dd1d3221641
2026-04-09 07:32:39.334319 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: magnum-conductor_fanout_c4d60ac9a043435f84fe9f5e8e3fbe0b
2026-04-09 07:32:39.381136 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-data
2026-04-09 07:32:39.429907 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-data.testbed-node-0
2026-04-09 07:32:39.482686 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-data.testbed-node-1
2026-04-09 07:32:39.539914 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-data.testbed-node-2
2026-04-09 07:32:39.584805 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-scheduler
2026-04-09 07:32:39.630405 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-scheduler.testbed-node-0
2026-04-09 07:32:39.676950 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-scheduler.testbed-node-1
2026-04-09 07:32:39.728933 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-scheduler.testbed-node-2
2026-04-09 07:32:39.775852 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-share
2026-04-09 07:32:39.821133 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-share.testbed-node-0@cephfsnative1
2026-04-09 07:32:39.857328 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-share.testbed-node-1@cephfsnative1
2026-04-09 07:32:39.916863 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-share.testbed-node-2@cephfsnative1
2026-04-09 07:32:39.958341 | orchestrator | 2026-04-09 07:32:39 | INFO  | Deleted queue: manila-share_fanout_2c8f381f8d39447bb5cc9f0fe677e59a
2026-04-09 07:32:40.004549 | orchestrator | 2026-04-09 07:32:40 | INFO  | Deleted queue: manila-share_fanout_49b4befada5445c1a2b47ab1fb66d340
2026-04-09 07:32:40.064736 | orchestrator | 2026-04-09 07:32:40 | INFO  | Deleted queue: manila-share_fanout_4ce3ecb188ff4840bcbbd1bcea6e7843
2026-04-09 07:32:40.253278 | orchestrator | 2026-04-09 07:32:40 | INFO  | Deleted queue: notifications.audit
2026-04-09 07:32:40.423078 | orchestrator | 2026-04-09 07:32:40 | INFO  | Deleted queue: notifications.critical
2026-04-09 07:32:40.567444 | orchestrator | 2026-04-09 07:32:40 | INFO  | Deleted queue: notifications.debug
2026-04-09 07:32:40.699318 | orchestrator | 2026-04-09 07:32:40 | INFO  | Deleted queue: notifications.error
2026-04-09 07:32:40.856102 | orchestrator | 2026-04-09 07:32:40 | INFO  | Deleted queue: notifications.info
2026-04-09 07:32:41.029590 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: notifications.sample
2026-04-09 07:32:41.180851 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: notifications.warn
2026-04-09 07:32:41.215838 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: octavia_provisioning_v2
2026-04-09 07:32:41.262297 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-0
2026-04-09 07:32:41.307370 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-1
2026-04-09 07:32:41.359552 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-2
2026-04-09 07:32:41.408528 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer
2026-04-09 07:32:41.461476 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer.testbed-node-0
2026-04-09 07:32:41.521577 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer.testbed-node-1
2026-04-09 07:32:41.593368 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer.testbed-node-2
2026-04-09 07:32:41.640883 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer_fanout_16b170b4bcbd4fcabea50df9186efcdd
2026-04-09 07:32:41.684911 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer_fanout_74f3ddbe9efe4f9cb0d81c6814785534
2026-04-09 07:32:41.718560 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer_fanout_7a52854e829944629d4f518dbb2a60c1
2026-04-09 07:32:41.769784 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer_fanout_95309e4829194bc38663576bdd4d211f
2026-04-09 07:32:41.809005 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer_fanout_be535d34c37f49b6b2bcbc39012e1ee5
2026-04-09 07:32:41.857641 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: producer_fanout_ef1d5347034040e2929196cb401e4bd3
2026-04-09 07:32:41.903585 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: q-plugin
2026-04-09 07:32:41.952670 | orchestrator | 2026-04-09 07:32:41 | INFO  | Deleted queue: q-plugin.testbed-node-0
2026-04-09 07:32:42.014162 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: q-plugin.testbed-node-1
2026-04-09 07:32:42.064241 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: q-plugin.testbed-node-2
2026-04-09 07:32:42.102592 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: q-reports-plugin
2026-04-09 07:32:42.151822 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: q-reports-plugin.testbed-node-0
2026-04-09 07:32:42.193206 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: q-reports-plugin.testbed-node-1
2026-04-09 07:32:42.241811 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: q-reports-plugin.testbed-node-2
2026-04-09 07:32:42.280774 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: q-server-resource-versions
2026-04-09 07:32:42.322998 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-0
2026-04-09 07:32:42.366945 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue:
q-server-resource-versions.testbed-node-1 2026-04-09 07:32:42.429850 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-2 2026-04-09 07:32:42.471778 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_00d8c011bbdf4bbe9e474c97726d8d52 2026-04-09 07:32:42.515723 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_0733286356e0483d8de9c3a404284dfb 2026-04-09 07:32:42.555826 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_175b9f271ceb436ea3b2e921de89e81a 2026-04-09 07:32:42.591759 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_200f23232ea444e0b16b1a55777ed79d 2026-04-09 07:32:42.625843 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_310560fb822b4d29927e61d924cc1507 2026-04-09 07:32:42.664995 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_405e5ed6c0b54ba099d55816ed55667a 2026-04-09 07:32:42.694569 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_8578701960c14df283b116bc5001688d 2026-04-09 07:32:42.728717 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_a05f08912d3746aab3a95cc5d0d37c46 2026-04-09 07:32:42.762862 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_ce976773bf8a43589f7d025bc22bb31b 2026-04-09 07:32:42.802323 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_de0203f45a3844b7a98c2b5966ce90d4 2026-04-09 07:32:42.836315 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: reply_e172a8be12184f5082687291ea50369e 2026-04-09 07:32:42.877708 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: scheduler 2026-04-09 07:32:42.914666 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: scheduler.testbed-node-0 2026-04-09 07:32:42.961599 | orchestrator | 2026-04-09 07:32:42 | INFO  | Deleted queue: scheduler.testbed-node-1 2026-04-09 07:32:43.020113 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: 
scheduler.testbed-node-2 2026-04-09 07:32:43.070366 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker 2026-04-09 07:32:43.119545 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker.testbed-node-0 2026-04-09 07:32:43.165160 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker.testbed-node-1 2026-04-09 07:32:43.216783 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker.testbed-node-2 2026-04-09 07:32:43.257231 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker_fanout_3ec88b44f21247e0aebe5bd8e331db9b 2026-04-09 07:32:43.290434 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker_fanout_6167decbe5bf42108b370fb0a6e950c1 2026-04-09 07:32:43.321986 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker_fanout_6a3f667e83e040aa8d47e26069bf9aa6 2026-04-09 07:32:43.361419 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker_fanout_bf77586457d04dd782e789ffdb87245f 2026-04-09 07:32:43.413910 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker_fanout_e41f68f4a07c471a84af1b4767878bcf 2026-04-09 07:32:43.455285 | orchestrator | 2026-04-09 07:32:43 | INFO  | Deleted queue: worker_fanout_fc14f61082574433b0efceb1c87ae4ae 2026-04-09 07:32:43.455359 | orchestrator | 2026-04-09 07:32:43 | INFO  | Successfully deleted 128 queue(s) in vhost '/' 2026-04-09 07:32:43.692352 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-09 07:32:49.875677 | orchestrator | 2026-04-09 07:32:49 | ERROR  | Unable to get ansible vault password 2026-04-09 07:32:49.875788 | orchestrator | 2026-04-09 07:32:49 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 07:32:49.875945 | orchestrator | 2026-04-09 07:32:49 | ERROR  | Dropping encrypted entries 2026-04-09 07:32:49.909333 | orchestrator | 2026-04-09 07:32:49 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 
(node: testbed-node-0) as openstack... 2026-04-09 07:32:50.128076 | orchestrator | 2026-04-09 07:32:50 | INFO  | Found 13 classic queue(s) in vhost '/': 2026-04-09 07:32:50.128196 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-09 07:32:50.128216 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor.cykcdelam52s (vhost: /, messages: 0) 2026-04-09 07:32:50.128286 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor.hbzgkxncdqav (vhost: /, messages: 0) 2026-04-09 07:32:50.128301 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor.mxbpe6z4rg4r (vhost: /, messages: 0) 2026-04-09 07:32:50.128313 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor_fanout_0c1eb08ade4b4112811cd116bdd6b4c2 (vhost: /, messages: 0) 2026-04-09 07:32:50.128326 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor_fanout_3aca7ac949d248549306a81b8e7fed9b (vhost: /, messages: 0) 2026-04-09 07:32:50.128337 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor_fanout_532b16f5be3c4962b53ddf24cfa6d53a (vhost: /, messages: 0) 2026-04-09 07:32:50.128349 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor_fanout_55d3ec5dcdb14ef0b29dbe13439f2cf7 (vhost: /, messages: 0) 2026-04-09 07:32:50.128442 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor_fanout_57ba0816dc4e47c4a46e9bc9d8af3ce6 (vhost: /, messages: 0) 2026-04-09 07:32:50.128457 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor_fanout_73cd068eca4a46d68c5e268ecfbceaed (vhost: /, messages: 0) 2026-04-09 07:32:50.128469 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor_fanout_7e57901f145d443bb977ac07fcc7ff75 (vhost: /, messages: 0) 2026-04-09 07:32:50.129165 | orchestrator | 2026-04-09 07:32:50 | INFO  |  - magnum-conductor_fanout_ad329c41f85d47b481d20dd1d3221641 (vhost: /, messages: 0) 2026-04-09 07:32:50.129212 | orchestrator | 2026-04-09 07:32:50 | INFO  
|  - magnum-conductor_fanout_c4d60ac9a043435f84fe9f5e8e3fbe0b (vhost: /, messages: 0) 2026-04-09 07:32:50.396722 | orchestrator | + osism migrate rabbitmq3to4 list --vhost openstack --quorum 2026-04-09 07:32:56.688208 | orchestrator | 2026-04-09 07:32:56 | ERROR  | Unable to get ansible vault password 2026-04-09 07:32:56.688339 | orchestrator | 2026-04-09 07:32:56 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 07:32:56.688361 | orchestrator | 2026-04-09 07:32:56 | ERROR  | Dropping encrypted entries 2026-04-09 07:32:56.722951 | orchestrator | 2026-04-09 07:32:56 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-09 07:32:56.919668 | orchestrator | 2026-04-09 07:32:56 | INFO  | Found 192 quorum queue(s) in vhost 'openstack': 2026-04-09 07:32:56.919857 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - alarm.all.sample (vhost: openstack, messages: 0) 2026-04-09 07:32:56.919886 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - alarming.sample (vhost: openstack, messages: 0) 2026-04-09 07:32:56.919907 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - barbican.workers (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920091 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - barbican.workers.barbican.queue (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920116 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - barbican.workers_fanout_testbed-node-0:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920130 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - barbican.workers_fanout_testbed-node-1:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920142 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - barbican.workers_fanout_testbed-node-2:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920153 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - barbican_notifications.info 
(vhost: openstack, messages: 0) 2026-04-09 07:32:56.920164 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920207 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920221 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920241 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920262 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central_fanout_testbed-node-0:designate-central:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920370 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central_fanout_testbed-node-0:designate-central:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920395 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central_fanout_testbed-node-1:designate-central:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920676 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central_fanout_testbed-node-1:designate-central:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920708 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central_fanout_testbed-node-2:designate-central:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920728 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - central_fanout_testbed-node-2:designate-central:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920747 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-backup (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920762 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-backup.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920774 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-backup.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920784 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - 
cinder-backup.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920795 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-backup_fanout_testbed-node-0:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920948 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-backup_fanout_testbed-node-1:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920966 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-backup_fanout_testbed-node-2:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.920977 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-scheduler (vhost: openstack, messages: 0) 2026-04-09 07:32:56.921421 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.921510 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.921537 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.921556 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-scheduler_fanout_testbed-node-0:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.921572 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-scheduler_fanout_testbed-node-1:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922108 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-scheduler_fanout_testbed-node-2:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922197 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922212 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922542 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 
(vhost: openstack, messages: 0) 2026-04-09 07:32:56.922565 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_testbed-node-0:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922577 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922588 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922598 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_testbed-node-1:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922608 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922639 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922650 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_testbed-node-2:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922731 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume_fanout_testbed-node-0:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922746 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume_fanout_testbed-node-1:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922881 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - cinder-volume_fanout_testbed-node-2:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922897 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - compute (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922908 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - compute.testbed-node-3 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.922919 | orchestrator 
| 2026-04-09 07:32:56 | INFO  |  - compute.testbed-node-4 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.923177 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - compute.testbed-node-5 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.923196 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - compute_fanout_testbed-node-3:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.923525 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - compute_fanout_testbed-node-4:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.923544 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - compute_fanout_testbed-node-5:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.923554 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor (vhost: openstack, messages: 0) 2026-04-09 07:32:56.923564 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.923574 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.923584 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924252 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924285 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924297 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924339 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924351 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:1 (vhost: openstack, messages: 0) 
2026-04-09 07:32:56.924362 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924374 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - event.sample (vhost: openstack, messages: 7) 2026-04-09 07:32:56.924385 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-data (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924488 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-data.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924504 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-data.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924514 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-data.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924642 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-data_fanout_testbed-node-0:manila-data:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924665 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-data_fanout_testbed-node-1:manila-data:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924857 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-data_fanout_testbed-node-2:manila-data:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924877 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-scheduler (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924887 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924897 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.924907 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.925386 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-scheduler_fanout_testbed-node-0:manila-scheduler:1 (vhost: openstack, messages: 0) 
2026-04-09 07:32:56.925406 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-scheduler_fanout_testbed-node-1:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.925416 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-scheduler_fanout_testbed-node-2:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.925426 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-share (vhost: openstack, messages: 0) 2026-04-09 07:32:56.925437 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.925447 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.925457 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.925490 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-share_fanout_testbed-node-0:manila-share:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926108 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-share_fanout_testbed-node-1:manila-share:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926132 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - manila-share_fanout_testbed-node-2:manila-share:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926157 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - notifications.audit (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926170 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - notifications.critical (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926182 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - notifications.debug (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926194 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - notifications.error (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926206 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - 
notifications.info (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926225 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - notifications.sample (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926598 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - notifications.warn (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926620 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - octavia_provisioning_v2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926631 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926641 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926651 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926821 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-0:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926841 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-1:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926851 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-2:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.926861 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - osism-listener-cinder (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927030 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - osism-listener-glance (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927050 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - osism-listener-ironic (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927060 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - osism-listener-keystone (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927070 | orchestrator | 2026-04-09 07:32:56 | INFO  
|  - osism-listener-neutron (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927080 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - osism-listener-nova (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927090 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927214 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927232 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927242 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927251 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927366 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927382 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927403 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927413 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.927603 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928088 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928109 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928119 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - 
q-plugin.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928129 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928139 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928149 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928166 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928513 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928534 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928545 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928555 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928579 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928590 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928600 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928611 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-09 07:32:56.928621 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: openstack, 
messages: 0)
2026-04-09 07:32:56.928631 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.928707 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.928721 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:10 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.929713 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:11 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.929739 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:12 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.929749 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.929770 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:3 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930221 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930247 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:10 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930258 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:11 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930269 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:12 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930279 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930289 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:3 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930298 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930309 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:10 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930319 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:11 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930329 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:12 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930339 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930349 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:3 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930383 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930393 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930403 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930413 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930423 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:7 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930433 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:8 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930452 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:9 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930489 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:7 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930580 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:8 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930594 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:9 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930615 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:7 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930625 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:8 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930635 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:9 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930644 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-0:designate-manage:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930654 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-0:designate-producer:3 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930844 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-0:designate-producer:4 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930863 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-1:designate-producer:3 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930873 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-1:designate-producer:4 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930883 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-2:designate-producer:3 (vhost: openstack, messages: 1)
2026-04-09 07:32:56.930893 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-2:designate-producer:4 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930903 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-3:nova-compute:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.930913 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-4:nova-compute:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931025 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - reply_testbed-node-5:nova-compute:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931250 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931283 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931357 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931385 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931396 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931407 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931505 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931521 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931644 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931915 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931933 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker (vhost: openstack, messages: 0)
2026-04-09 07:32:56.931944 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.932168 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.932186 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.932197 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.932206 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.932217 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.932227 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.932237 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-09 07:32:56.932247 | orchestrator | 2026-04-09 07:32:56 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-09 07:32:57.189633 | orchestrator | + osism migrate rabbitmq3to4 delete-exchanges
2026-04-09 07:33:03.558608 | orchestrator | 2026-04-09 07:33:03 | ERROR  | Unable to get ansible vault password
2026-04-09 07:33:03.558683 | orchestrator | 2026-04-09
07:33:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-09 07:33:03.558691 | orchestrator | 2026-04-09 07:33:03 | ERROR  | Dropping encrypted entries
2026-04-09 07:33:03.592946 | orchestrator | 2026-04-09 07:33:03 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-09 07:33:03.612782 | orchestrator | 2026-04-09 07:33:03 | INFO  | Found 27 exchange(s) in vhost '/'
2026-04-09 07:33:03.655593 | orchestrator | 2026-04-09 07:33:03 | INFO  | Deleted exchange: aodh
2026-04-09 07:33:03.695993 | orchestrator | 2026-04-09 07:33:03 | INFO  | Deleted exchange: ceilometer
2026-04-09 07:33:03.737691 | orchestrator | 2026-04-09 07:33:03 | INFO  | Deleted exchange: cinder
2026-04-09 07:33:03.783540 | orchestrator | 2026-04-09 07:33:03 | INFO  | Deleted exchange: designate
2026-04-09 07:33:03.817483 | orchestrator | 2026-04-09 07:33:03 | INFO  | Deleted exchange: dns
2026-04-09 07:33:03.854380 | orchestrator | 2026-04-09 07:33:03 | INFO  | Deleted exchange: glance
2026-04-09 07:33:03.905650 | orchestrator | 2026-04-09 07:33:03 | INFO  | Deleted exchange: heat
2026-04-09 07:33:03.937360 | orchestrator | 2026-04-09 07:33:03 | INFO  | Deleted exchange: ironic
2026-04-09 07:33:03.975910 | orchestrator | 2026-04-09 07:33:03 | INFO  | Deleted exchange: keystone
2026-04-09 07:33:04.014189 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: l3_agent_fanout
2026-04-09 07:33:04.068024 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: magnum
2026-04-09 07:33:04.117579 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: magnum-conductor_fanout
2026-04-09 07:33:04.154664 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: neutron
2026-04-09 07:33:04.187861 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: neutron-vo-Network-1.1_fanout
2026-04-09 07:33:04.219734 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: neutron-vo-Port-1.10_fanout
2026-04-09 07:33:04.249390 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: neutron-vo-SecurityGroup-1.6_fanout
2026-04-09 07:33:04.281728 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: neutron-vo-SecurityGroupRule-1.3_fanout
2026-04-09 07:33:04.313897 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: neutron-vo-Subnet-1.2_fanout
2026-04-09 07:33:04.348697 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: nova
2026-04-09 07:33:04.390382 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: octavia
2026-04-09 07:33:04.428003 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: openstack
2026-04-09 07:33:04.458882 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: q-agent-notifier-port-update_fanout
2026-04-09 07:33:04.497083 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: q-agent-notifier-security_group-update_fanout
2026-04-09 07:33:04.533676 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: scheduler_fanout
2026-04-09 07:33:04.569887 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: swift
2026-04-09 07:33:04.610647 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: trove
2026-04-09 07:33:04.648857 | orchestrator | 2026-04-09 07:33:04 | INFO  | Deleted exchange: zaqar
2026-04-09 07:33:04.648941 | orchestrator | 2026-04-09 07:33:04 | INFO  | Successfully deleted 27 exchange(s) in vhost '/'
2026-04-09 07:33:04.918287 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-04-09 07:33:11.105193 | orchestrator | 2026-04-09 07:33:11 | ERROR  | Unable to get ansible vault password
2026-04-09 07:33:11.105300 | orchestrator | 2026-04-09 07:33:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-09 07:33:11.105318 | orchestrator | 2026-04-09 07:33:11 | ERROR  | Dropping encrypted entries
2026-04-09 07:33:11.139658 | orchestrator | 2026-04-09 07:33:11 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-09 07:33:11.152766 | orchestrator | 2026-04-09 07:33:11 | INFO  | No exchanges found in vhost '/'
2026-04-09 07:33:11.412972 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-09 07:33:11.413090 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/400-monitoring.sh
2026-04-09 07:33:12.699262 | orchestrator | 2026-04-09 07:33:12 | INFO  | Prepare task for execution of prometheus.
2026-04-09 07:33:12.765289 | orchestrator | 2026-04-09 07:33:12 | INFO  | Task 10eda340-a9bb-464c-b820-0805d1344235 (prometheus) was prepared for execution.
2026-04-09 07:33:12.765411 | orchestrator | 2026-04-09 07:33:12 | INFO  | It takes a moment until task 10eda340-a9bb-464c-b820-0805d1344235 (prometheus) has been started and output is visible here.
2026-04-09 07:33:33.745884 | orchestrator |
2026-04-09 07:33:33.746096 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 07:33:33.746116 | orchestrator |
2026-04-09 07:33:33.746126 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 07:33:33.746137 | orchestrator | Thursday 09 April 2026 07:33:17 +0000 (0:00:01.783) 0:00:01.783 ********
2026-04-09 07:33:33.746147 | orchestrator | ok: [testbed-manager]
2026-04-09 07:33:33.746159 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:33:33.746169 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:33:33.746179 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:33:33.746189 | orchestrator | ok: [testbed-node-3]
2026-04-09 07:33:33.746198 | orchestrator | ok: [testbed-node-4]
2026-04-09 07:33:33.746208 | orchestrator | ok: [testbed-node-5]
2026-04-09 07:33:33.746218 | orchestrator |
2026-04-09 07:33:33.746228 | orchestrator | TASK [Group hosts based on enabled services]
***********************************
2026-04-09 07:33:33.746238 | orchestrator | Thursday 09 April 2026 07:33:20 +0000 (0:00:02.567) 0:00:04.350 ********
2026-04-09 07:33:33.746277 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-09 07:33:33.746288 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-09 07:33:33.746297 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-09 07:33:33.746307 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-09 07:33:33.746317 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-09 07:33:33.746327 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-09 07:33:33.746337 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-09 07:33:33.746346 | orchestrator |
2026-04-09 07:33:33.746356 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-09 07:33:33.746366 | orchestrator |
2026-04-09 07:33:33.746375 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-09 07:33:33.746385 | orchestrator | Thursday 09 April 2026 07:33:25 +0000 (0:00:05.256) 0:00:09.607 ********
2026-04-09 07:33:33.746419 | orchestrator | included: /ansible/roles/prometheus/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 07:33:33.746433 | orchestrator |
2026-04-09 07:33:33.746446 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-09 07:33:33.746457 | orchestrator | Thursday 09 April 2026 07:33:31 +0000 (0:00:05.501) 0:00:15.108 ********
2026-04-09 07:33:33.746496 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 07:33:33.746515 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:33.746529 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:33.746563 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:33.746588 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:33.746599 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:33.746610 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:33.746626 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 07:33:33.746637 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:33.746647 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:33.746658 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:33.746684 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:34.439155 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:34.439289 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:34.439332 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:33:34.439348 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 07:33:34.439362 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:34.439374 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 07:33:34.439468 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 07:33:34.439483 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 07:33:34.439494 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:34.439513 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 07:33:34.439525 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 07:33:34.439536 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 07:33:34.439548 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 07:33:34.439576 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:41.458983 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 07:33:41.459124 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:41.459141 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 07:33:41.459154 | orchestrator |
2026-04-09 07:33:41.459189 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-09 07:33:41.459203 | orchestrator | Thursday 09 April 2026 07:33:35 +0000 (0:00:04.702) 0:00:19.811 ********
2026-04-09 07:33:41.459216 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 07:33:41.459230 | orchestrator |
2026-04-09 07:33:41.459241 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-04-09 07:33:41.459253 | orchestrator | Thursday 09 April 2026 07:33:38 +0000 (0:00:02.694) 0:00:22.505 ********
2026-04-09 07:33:41.459265 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:41.459277 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:41.459399 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 07:33:41.459418 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:41.459430 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:41.459448 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:41.459461 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:41.459474 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 07:33:41.459496 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09
07:33:41.459510 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:41.459532 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:43.028698 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:43.028832 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:43.028872 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:43.028886 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:43.028923 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:43.028939 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:43.028951 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:43.028986 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 07:33:43.029001 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 07:33:43.029022 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:33:43.029037 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 07:33:43.029057 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:43.029069 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:43.029089 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:46.291682 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:46.291826 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:46.291863 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:46.291876 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:46.291917 | orchestrator | 2026-04-09 07:33:46.291931 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-09 07:33:46.291944 | orchestrator | Thursday 09 April 2026 07:33:45 +0000 (0:00:06.367) 0:00:28.873 ******** 2026-04-09 07:33:46.291962 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 07:33:46.291976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:46.292008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:46.292022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:46.292040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:46.292053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:46.292073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:46.292085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:46.292097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:46.292116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:47.264510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:47.264647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:47.264687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:47.264729 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:33:47.264745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:47.264758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:47.264773 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:33:47.264806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:47.264819 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:33:47.264831 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:47.264844 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:33:47.264861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:47.264882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:47.264894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:47.264905 | orchestrator | 
skipping: [testbed-node-1] 2026-04-09 07:33:47.264917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:47.264928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:47.264949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.097938 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:33:50.098153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.098187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.098239 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:33:50.098270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:50.098282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.098293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.098303 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:33:50.098314 | orchestrator | 2026-04-09 07:33:50.098325 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-09 07:33:50.098337 | orchestrator | Thursday 09 April 2026 07:33:48 +0000 (0:00:03.709) 0:00:32.582 ******** 2026-04-09 07:33:50.098417 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 07:33:50.098432 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:50.098451 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.098466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:50.098477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:50.098489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:50.098499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:50.098509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:50.098527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:50.850015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:50.850223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:50.850243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:50.850256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:50.850269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.850281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.850318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:33:50.850478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.850508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:50.850521 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:33:50.850536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:50.850548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-09 07:33:50.850559 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:33:50.850570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.850582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:50.850604 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:33:50.850626 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:56.183195 | orchestrator | skipping: [testbed-manager] 2026-04-09 
07:33:56.183345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:56.183429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 07:33:56.183443 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:33:56.183456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:33:56.183469 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:33:56.183481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:33:56.183493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:33:56.183504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 07:33:56.183541 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:33:56.183553 | orchestrator | 2026-04-09 07:33:56.183566 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-09 07:33:56.183579 | orchestrator | Thursday 09 April 2026 07:33:52 +0000 (0:00:04.145) 0:00:36.728 ******** 2026-04-09 07:33:56.183611 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:33:56.183633 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 07:33:56.183647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:33:56.183659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:33:56.183671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:33:56.183683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:33:56.183704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:33:56.183727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:58.163021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:58.163153 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:33:58.163166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:58.163175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:58.163181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:58.163207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:58.163215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:58.163242 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:58.163250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 07:33:58.163257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:58.163264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 07:33:58.163270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:33:58.163311 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 07:33:58.163318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:33:58.163333 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': 
['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:34:29.081244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:34:29.081411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:34:29.081426 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:34:29.081455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:34:29.081464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:34:29.081471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:34:29.081479 | orchestrator | 2026-04-09 07:34:29.081487 | orchestrator | TASK 
[prometheus : Find custom prometheus alert rules files] ******************* 2026-04-09 07:34:29.081496 | orchestrator | Thursday 09 April 2026 07:34:00 +0000 (0:00:07.799) 0:00:44.527 ******** 2026-04-09 07:34:29.081503 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 07:34:29.081511 | orchestrator | 2026-04-09 07:34:29.081518 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-09 07:34:29.081525 | orchestrator | Thursday 09 April 2026 07:34:02 +0000 (0:00:02.294) 0:00:46.822 ******** 2026-04-09 07:34:29.081532 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:34:29.081540 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:34:29.081546 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:34:29.081553 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:34:29.081560 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:34:29.081567 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:34:29.081574 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:34:29.081581 | orchestrator | 2026-04-09 07:34:29.081588 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-09 07:34:29.081609 | orchestrator | Thursday 09 April 2026 07:34:04 +0000 (0:00:01.965) 0:00:48.787 ******** 2026-04-09 07:34:29.081616 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 07:34:29.081623 | orchestrator | 2026-04-09 07:34:29.081630 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-09 07:34:29.081636 | orchestrator | Thursday 09 April 2026 07:34:06 +0000 (0:00:01.735) 0:00:50.523 ******** 2026-04-09 07:34:29.081649 | orchestrator | [WARNING]: Skipped 2026-04-09 07:34:29.081658 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081666 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-09 
07:34:29.081673 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081679 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-09 07:34:29.081687 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 07:34:29.081694 | orchestrator | [WARNING]: Skipped 2026-04-09 07:34:29.081700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081713 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-09 07:34:29.081720 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081727 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-09 07:34:29.081734 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 07:34:29.081740 | orchestrator | [WARNING]: Skipped 2026-04-09 07:34:29.081747 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081754 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-09 07:34:29.081761 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081768 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-09 07:34:29.081776 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 07:34:29.081786 | orchestrator | [WARNING]: Skipped 2026-04-09 07:34:29.081793 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081801 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-09 07:34:29.081808 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081817 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-09 07:34:29.081825 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 07:34:29.081832 | 
orchestrator | [WARNING]: Skipped 2026-04-09 07:34:29.081839 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081847 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-09 07:34:29.081854 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081861 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-09 07:34:29.081870 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 07:34:29.081877 | orchestrator | [WARNING]: Skipped 2026-04-09 07:34:29.081884 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081893 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-09 07:34:29.081901 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081909 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-09 07:34:29.081916 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 07:34:29.081924 | orchestrator | [WARNING]: Skipped 2026-04-09 07:34:29.081931 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081937 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-09 07:34:29.081944 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 07:34:29.081951 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-09 07:34:29.081957 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 07:34:29.081964 | orchestrator | 2026-04-09 07:34:29.081970 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-09 07:34:29.081978 | orchestrator | Thursday 09 April 2026 07:34:09 +0000 (0:00:03.102) 0:00:53.625 ******** 2026-04-09 07:34:29.081985 | orchestrator | 
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 07:34:29.081993 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:34:29.082000 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 07:34:29.082007 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:34:29.082014 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 07:34:29.082074 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:34:29.082081 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 07:34:29.082088 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:34:29.082102 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 07:34:29.082108 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:34:29.082114 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 07:34:29.082121 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:34:29.082127 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-09 07:34:29.082133 | orchestrator | 2026-04-09 07:34:29.082140 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-09 07:34:29.082146 | orchestrator | Thursday 09 April 2026 07:34:28 +0000 (0:00:18.724) 0:01:12.349 ******** 2026-04-09 07:34:29.082164 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 07:37:07.791216 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:07.791361 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 07:37:07.791397 | 
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 07:37:07.791410 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:07.791422 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:07.791434 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 07:37:07.791446 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:07.791457 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 07:37:07.791468 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:07.791480 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 07:37:07.791491 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:07.791502 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-09 07:37:07.791513 | orchestrator | 2026-04-09 07:37:07.791525 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-09 07:37:07.791536 | orchestrator | Thursday 09 April 2026 07:34:33 +0000 (0:00:04.836) 0:01:17.186 ******** 2026-04-09 07:37:07.791549 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 07:37:07.791562 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 07:37:07.791573 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:07.791584 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:07.791596 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 
07:37:07.791607 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:07.791618 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-09 07:37:07.791630 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 07:37:07.791641 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:07.791652 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 07:37:07.791663 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:07.791674 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 07:37:07.791686 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:07.791697 | orchestrator | 2026-04-09 07:37:07.791709 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-09 07:37:07.791747 | orchestrator | Thursday 09 April 2026 07:34:36 +0000 (0:00:02.942) 0:01:20.129 ******** 2026-04-09 07:37:07.791761 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 07:37:07.791774 | orchestrator | 2026-04-09 07:37:07.791789 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-09 07:37:07.791803 | orchestrator | Thursday 09 April 2026 07:34:38 +0000 (0:00:01.782) 0:01:21.912 ******** 2026-04-09 07:37:07.791816 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:37:07.791830 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:07.791844 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:07.791856 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:07.791870 | orchestrator | skipping: [testbed-node-3] 2026-04-09 
07:37:07.791883 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:07.791895 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:07.791909 | orchestrator | 2026-04-09 07:37:07.791922 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-09 07:37:07.791936 | orchestrator | Thursday 09 April 2026 07:34:39 +0000 (0:00:01.870) 0:01:23.782 ******** 2026-04-09 07:37:07.791950 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:37:07.791995 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:07.792012 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:07.792025 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:07.792039 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:37:07.792057 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:37:07.792076 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:37:07.792093 | orchestrator | 2026-04-09 07:37:07.792112 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-09 07:37:07.792130 | orchestrator | Thursday 09 April 2026 07:34:43 +0000 (0:00:03.361) 0:01:27.143 ******** 2026-04-09 07:37:07.792148 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 07:37:07.792169 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 07:37:07.792187 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 07:37:07.792206 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:07.792218 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:37:07.792229 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:07.792240 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 07:37:07.792251 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 07:37:07.792282 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 07:37:07.792293 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:07.792311 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 07:37:07.792323 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:07.792334 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 07:37:07.792345 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:07.792356 | orchestrator | 2026-04-09 07:37:07.792367 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-09 07:37:07.792378 | orchestrator | Thursday 09 April 2026 07:34:46 +0000 (0:00:02.965) 0:01:30.108 ******** 2026-04-09 07:37:07.792389 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 07:37:07.792400 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 07:37:07.792411 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:07.792422 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 07:37:07.792433 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:07.792456 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:07.792467 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 07:37:07.792478 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:07.792489 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-09 07:37:07.792500 | orchestrator | skipping: 
[testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 07:37:07.792511 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:07.792522 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 07:37:07.792533 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:07.792544 | orchestrator | 2026-04-09 07:37:07.792555 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-09 07:37:07.792566 | orchestrator | Thursday 09 April 2026 07:34:49 +0000 (0:00:02.846) 0:01:32.955 ******** 2026-04-09 07:37:07.792577 | orchestrator | [WARNING]: Skipped 2026-04-09 07:37:07.792589 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-09 07:37:07.792600 | orchestrator | due to this access issue: 2026-04-09 07:37:07.792611 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-09 07:37:07.792622 | orchestrator | not a directory 2026-04-09 07:37:07.792633 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 07:37:07.792644 | orchestrator | 2026-04-09 07:37:07.792655 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-09 07:37:07.792665 | orchestrator | Thursday 09 April 2026 07:34:51 +0000 (0:00:02.429) 0:01:35.384 ******** 2026-04-09 07:37:07.792676 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:37:07.792687 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:07.792698 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:07.792709 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:07.792720 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:07.792731 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:07.792742 | orchestrator | skipping: [testbed-node-5] 2026-04-09 
07:37:07.792759 | orchestrator | 2026-04-09 07:37:07.792779 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-09 07:37:07.792799 | orchestrator | Thursday 09 April 2026 07:34:53 +0000 (0:00:01.952) 0:01:37.337 ******** 2026-04-09 07:37:07.792818 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:37:07.792831 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:07.792842 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:07.792853 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:07.792863 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:07.792874 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:07.792885 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:07.792896 | orchestrator | 2026-04-09 07:37:07.792907 | orchestrator | TASK [prometheus : Check for the existence of Prometheus v2 container volume] *** 2026-04-09 07:37:07.792919 | orchestrator | Thursday 09 April 2026 07:34:55 +0000 (0:00:02.459) 0:01:39.797 ******** 2026-04-09 07:37:07.792936 | orchestrator | ok: [testbed-manager] 2026-04-09 07:37:07.793019 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:37:07.793047 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:37:07.793064 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:37:07.793082 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:37:07.793100 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:37:07.793117 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:37:07.793135 | orchestrator | 2026-04-09 07:37:07.793154 | orchestrator | TASK [prometheus : Gracefully stop Prometheus] ********************************* 2026-04-09 07:37:07.793173 | orchestrator | Thursday 09 April 2026 07:34:58 +0000 (0:00:02.427) 0:01:42.224 ******** 2026-04-09 07:37:07.793191 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:07.793210 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:07.793243 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 07:37:07.793262 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:07.793281 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:07.793300 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:07.793312 | orchestrator | changed: [testbed-manager] 2026-04-09 07:37:07.793323 | orchestrator | 2026-04-09 07:37:07.793334 | orchestrator | TASK [prometheus : Create new Prometheus v3 volume] **************************** 2026-04-09 07:37:07.793345 | orchestrator | Thursday 09 April 2026 07:37:06 +0000 (0:02:08.454) 0:03:50.679 ******** 2026-04-09 07:37:07.793356 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:07.793367 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:07.793379 | orchestrator | changed: [testbed-manager] 2026-04-09 07:37:07.793389 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:07.793400 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:07.793426 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:16.057781 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:16.057893 | orchestrator | 2026-04-09 07:37:16.057910 | orchestrator | TASK [prometheus : Move _data from old to new volume] ************************** 2026-04-09 07:37:16.057998 | orchestrator | Thursday 09 April 2026 07:37:09 +0000 (0:00:02.178) 0:03:52.858 ******** 2026-04-09 07:37:16.058075 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:16.058089 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:16.058101 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:16.058112 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:16.058123 | orchestrator | changed: [testbed-manager] 2026-04-09 07:37:16.058134 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:16.058146 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:16.058157 | orchestrator | 2026-04-09 07:37:16.058169 | orchestrator | TASK [prometheus : Remove old Prometheus v2 
volume] **************************** 2026-04-09 07:37:16.058180 | orchestrator | Thursday 09 April 2026 07:37:11 +0000 (0:00:02.019) 0:03:54.877 ******** 2026-04-09 07:37:16.058191 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:16.058202 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:16.058213 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:16.058224 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:37:16.058235 | orchestrator | changed: [testbed-manager] 2026-04-09 07:37:16.058246 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:16.058257 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:37:16.058268 | orchestrator | 2026-04-09 07:37:16.058279 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-09 07:37:16.058291 | orchestrator | Thursday 09 April 2026 07:37:13 +0000 (0:00:02.390) 0:03:57.268 ******** 2026-04-09 07:37:16.058309 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready 
HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 07:37:16.058329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:37:16.058366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:37:16.058381 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:37:16.058421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:37:16.058436 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:37:16.058448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:37:16.058460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 07:37:16.058472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:16.058493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:16.058506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:37:16.058519 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:37:16.058542 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:37:18.569259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:37:18.569370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:18.569386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 07:37:18.569425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 07:37:18.569439 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:37:18.569450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:18.569488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 07:37:18.569499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:18.569508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:18.569518 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:18.569534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:37:18.569544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:37:18.569553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 07:37:18.569566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:18.569583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:22.781679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 07:37:22.781789 | orchestrator | 2026-04-09 07:37:22.781810 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-04-09 07:37:22.781850 | orchestrator | Thursday 09 April 2026 07:37:19 +0000 (0:00:06.488) 0:04:03.757 ******** 2026-04-09 07:37:22.781863 | orchestrator | changed: [testbed-manager] => { 2026-04-09 07:37:22.781876 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:37:22.781887 | orchestrator | } 2026-04-09 07:37:22.781899 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 07:37:22.782065 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:37:22.782084 | orchestrator | } 2026-04-09 07:37:22.782096 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 07:37:22.782107 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:37:22.782118 | orchestrator | } 2026-04-09 07:37:22.782129 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 07:37:22.782141 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:37:22.782197 | orchestrator | } 2026-04-09 07:37:22.782212 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 07:37:22.782225 | orchestrator |  "msg": "Notifying 
handlers" 2026-04-09 07:37:22.782238 | orchestrator | } 2026-04-09 07:37:22.782251 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 07:37:22.782264 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:37:22.782276 | orchestrator | } 2026-04-09 07:37:22.782289 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 07:37:22.782302 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:37:22.782315 | orchestrator | } 2026-04-09 07:37:22.782328 | orchestrator | 2026-04-09 07:37:22.782341 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 07:37:22.782354 | orchestrator | Thursday 09 April 2026 07:37:22 +0000 (0:00:02.189) 0:04:05.946 ******** 2026-04-09 07:37:22.782371 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 07:37:22.782389 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:37:22.782425 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:37:22.782470 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:37:22.782504 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:22.782526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:37:22.782547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:22.782568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:22.782587 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:37:22.782617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:37:22.782636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:22.782681 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:37:23.406555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:37:23.406661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:23.406680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:23.406695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:37:23.406707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:23.406719 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:37:23.406733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:37:23.406763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:23.406815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:23.406829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:37:23.406841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 07:37:23.406852 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:37:23.406864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:37:23.406875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:37:23.406887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:37:23.406904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 07:37:23.406923 | 
orchestrator | skipping: [testbed-node-4] 2026-04-09 07:37:23.406991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:39:51.951271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 07:39:51.951394 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:39:51.951423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 07:39:51.951447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 07:39:51.951467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 07:39:51.951495 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:39:51.951517 | orchestrator | 2026-04-09 07:39:51.951537 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 07:39:51.951559 | orchestrator | Thursday 09 April 2026 07:37:25 +0000 (0:00:02.954) 0:04:08.900 ******** 2026-04-09 07:39:51.951577 | orchestrator | 2026-04-09 07:39:51.951597 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 07:39:51.951609 | orchestrator | Thursday 09 April 2026 07:37:25 +0000 (0:00:00.476) 0:04:09.377 ******** 2026-04-09 07:39:51.951620 | orchestrator | 2026-04-09 07:39:51.951631 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 07:39:51.951668 | orchestrator | Thursday 09 April 2026 07:37:26 +0000 (0:00:00.476) 0:04:09.853 ******** 2026-04-09 07:39:51.951679 | orchestrator | 
2026-04-09 07:39:51.951690 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 07:39:51.951701 | orchestrator | Thursday 09 April 2026 07:37:26 +0000 (0:00:00.449) 0:04:10.302 ******** 2026-04-09 07:39:51.951712 | orchestrator | 2026-04-09 07:39:51.951780 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 07:39:51.951793 | orchestrator | Thursday 09 April 2026 07:37:26 +0000 (0:00:00.461) 0:04:10.764 ******** 2026-04-09 07:39:51.951807 | orchestrator | 2026-04-09 07:39:51.951819 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 07:39:51.951832 | orchestrator | Thursday 09 April 2026 07:37:27 +0000 (0:00:00.712) 0:04:11.477 ******** 2026-04-09 07:39:51.951844 | orchestrator | 2026-04-09 07:39:51.951872 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 07:39:51.951886 | orchestrator | Thursday 09 April 2026 07:37:28 +0000 (0:00:00.445) 0:04:11.922 ******** 2026-04-09 07:39:51.951898 | orchestrator | 2026-04-09 07:39:51.951912 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-09 07:39:51.951924 | orchestrator | Thursday 09 April 2026 07:37:28 +0000 (0:00:00.824) 0:04:12.747 ******** 2026-04-09 07:39:51.951937 | orchestrator | changed: [testbed-manager] 2026-04-09 07:39:51.951950 | orchestrator | 2026-04-09 07:39:51.951962 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-09 07:39:51.951975 | orchestrator | Thursday 09 April 2026 07:37:53 +0000 (0:00:24.764) 0:04:37.511 ******** 2026-04-09 07:39:51.951987 | orchestrator | changed: [testbed-node-3] 2026-04-09 07:39:51.952000 | orchestrator | changed: [testbed-manager] 2026-04-09 07:39:51.952013 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:39:51.952026 | orchestrator | 
changed: [testbed-node-0] 2026-04-09 07:39:51.952038 | orchestrator | changed: [testbed-node-5] 2026-04-09 07:39:51.952051 | orchestrator | changed: [testbed-node-4] 2026-04-09 07:39:51.952111 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:39:51.952124 | orchestrator | 2026-04-09 07:39:51.952138 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-09 07:39:51.952172 | orchestrator | Thursday 09 April 2026 07:38:12 +0000 (0:00:18.809) 0:04:56.320 ******** 2026-04-09 07:39:51.952184 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:39:51.952195 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:39:51.952206 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:39:51.952217 | orchestrator | 2026-04-09 07:39:51.952228 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-09 07:39:51.952240 | orchestrator | Thursday 09 April 2026 07:38:25 +0000 (0:00:13.341) 0:05:09.662 ******** 2026-04-09 07:39:51.952251 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:39:51.952262 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:39:51.952273 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:39:51.952283 | orchestrator | 2026-04-09 07:39:51.952294 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-09 07:39:51.952305 | orchestrator | Thursday 09 April 2026 07:38:39 +0000 (0:00:13.664) 0:05:23.326 ******** 2026-04-09 07:39:51.952322 | orchestrator | changed: [testbed-node-4] 2026-04-09 07:39:51.952340 | orchestrator | changed: [testbed-node-3] 2026-04-09 07:39:51.952357 | orchestrator | changed: [testbed-manager] 2026-04-09 07:39:51.952375 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:39:51.952400 | orchestrator | changed: [testbed-node-5] 2026-04-09 07:39:51.952420 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:39:51.952439 | orchestrator | changed: 
[testbed-node-1] 2026-04-09 07:39:51.952457 | orchestrator | 2026-04-09 07:39:51.952475 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-09 07:39:51.952498 | orchestrator | Thursday 09 April 2026 07:38:57 +0000 (0:00:17.979) 0:05:41.306 ******** 2026-04-09 07:39:51.952523 | orchestrator | changed: [testbed-manager] 2026-04-09 07:39:51.952558 | orchestrator | 2026-04-09 07:39:51.952578 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-09 07:39:51.952595 | orchestrator | Thursday 09 April 2026 07:39:12 +0000 (0:00:15.182) 0:05:56.488 ******** 2026-04-09 07:39:51.952611 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:39:51.952622 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:39:51.952633 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:39:51.952644 | orchestrator | 2026-04-09 07:39:51.952655 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-09 07:39:51.952666 | orchestrator | Thursday 09 April 2026 07:39:26 +0000 (0:00:13.496) 0:06:09.985 ******** 2026-04-09 07:39:51.952676 | orchestrator | changed: [testbed-manager] 2026-04-09 07:39:51.952687 | orchestrator | 2026-04-09 07:39:51.952698 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-09 07:39:51.952709 | orchestrator | Thursday 09 April 2026 07:39:38 +0000 (0:00:12.709) 0:06:22.695 ******** 2026-04-09 07:39:51.952747 | orchestrator | changed: [testbed-node-3] 2026-04-09 07:39:51.952760 | orchestrator | changed: [testbed-node-4] 2026-04-09 07:39:51.952771 | orchestrator | changed: [testbed-node-5] 2026-04-09 07:39:51.952782 | orchestrator | 2026-04-09 07:39:51.952793 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:39:51.952805 | orchestrator | testbed-manager : ok=28  changed=14  unreachable=0 
failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 07:39:51.952818 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 07:39:51.952829 | orchestrator | testbed-node-1 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 07:39:51.952841 | orchestrator | testbed-node-2 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 07:39:51.952852 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 07:39:51.952863 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 07:39:51.952874 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 07:39:51.952885 | orchestrator | 2026-04-09 07:39:51.952896 | orchestrator | 2026-04-09 07:39:51.952907 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:39:51.952918 | orchestrator | Thursday 09 April 2026 07:39:51 +0000 (0:00:13.063) 0:06:35.759 ******** 2026-04-09 07:39:51.952936 | orchestrator | =============================================================================== 2026-04-09 07:39:51.952948 | orchestrator | prometheus : Gracefully stop Prometheus ------------------------------- 128.45s 2026-04-09 07:39:51.952959 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 24.76s 2026-04-09 07:39:51.952970 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 18.81s 2026-04-09 07:39:51.952981 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.72s 2026-04-09 07:39:51.952992 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.98s 2026-04-09 07:39:51.953003 | orchestrator | prometheus : Restart 
prometheus-alertmanager container ----------------- 15.18s 2026-04-09 07:39:51.953014 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 13.66s 2026-04-09 07:39:51.953025 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.50s 2026-04-09 07:39:51.953036 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.34s 2026-04-09 07:39:51.953047 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.06s 2026-04-09 07:39:51.953075 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 12.71s 2026-04-09 07:39:52.377493 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.80s 2026-04-09 07:39:52.377601 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 6.49s 2026-04-09 07:39:52.377625 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.37s 2026-04-09 07:39:52.377644 | orchestrator | prometheus : include_tasks ---------------------------------------------- 5.50s 2026-04-09 07:39:52.377663 | orchestrator | Group hosts based on enabled services ----------------------------------- 5.26s 2026-04-09 07:39:52.377681 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.84s 2026-04-09 07:39:52.377700 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.70s 2026-04-09 07:39:52.377793 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.15s 2026-04-09 07:39:52.377819 | orchestrator | prometheus : Flush handlers --------------------------------------------- 3.85s 2026-04-09 07:39:53.882846 | orchestrator | 2026-04-09 07:39:53 | INFO  | Prepare task for execution of grafana. 
2026-04-09 07:39:53.949856 | orchestrator | 2026-04-09 07:39:53 | INFO  | Task 8827c538-cb76-4338-a76a-894a60e178ef (grafana) was prepared for execution. 2026-04-09 07:39:53.950169 | orchestrator | 2026-04-09 07:39:53 | INFO  | It takes a moment until task 8827c538-cb76-4338-a76a-894a60e178ef (grafana) has been started and output is visible here. 2026-04-09 07:40:17.791913 | orchestrator | 2026-04-09 07:40:17.792060 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 07:40:17.792082 | orchestrator | 2026-04-09 07:40:17.792094 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 07:40:17.792106 | orchestrator | Thursday 09 April 2026 07:39:59 +0000 (0:00:01.731) 0:00:01.731 ******** 2026-04-09 07:40:17.792118 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:40:17.792177 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:40:17.792211 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:40:17.792234 | orchestrator | 2026-04-09 07:40:17.792245 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 07:40:17.792257 | orchestrator | Thursday 09 April 2026 07:40:00 +0000 (0:00:01.698) 0:00:03.429 ******** 2026-04-09 07:40:17.792269 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-09 07:40:17.792282 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-09 07:40:17.792301 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-09 07:40:17.792329 | orchestrator | 2026-04-09 07:40:17.792349 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-09 07:40:17.792365 | orchestrator | 2026-04-09 07:40:17.792383 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-09 07:40:17.792400 | orchestrator | Thursday 09 April 2026 07:40:03 +0000 
(0:00:02.278) 0:00:05.708 ******** 2026-04-09 07:40:17.792419 | orchestrator | included: /ansible/roles/grafana/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:40:17.792440 | orchestrator | 2026-04-09 07:40:17.792457 | orchestrator | TASK [grafana : Checking if Grafana container needs upgrading] ***************** 2026-04-09 07:40:17.792477 | orchestrator | Thursday 09 April 2026 07:40:05 +0000 (0:00:02.948) 0:00:08.656 ******** 2026-04-09 07:40:17.792496 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:40:17.792510 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:40:17.792523 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:40:17.792536 | orchestrator | 2026-04-09 07:40:17.792549 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-09 07:40:17.792563 | orchestrator | Thursday 09 April 2026 07:40:08 +0000 (0:00:03.011) 0:00:11.668 ******** 2026-04-09 07:40:17.792601 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:17.792655 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:17.792671 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:17.792684 | orchestrator | 2026-04-09 07:40:17.792726 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-09 07:40:17.792739 | orchestrator | Thursday 09 April 2026 07:40:11 +0000 (0:00:02.087) 0:00:13.756 ******** 2026-04-09 07:40:17.792753 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 07:40:17.792767 | orchestrator | 2026-04-09 07:40:17.792780 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-09 07:40:17.792820 | orchestrator | Thursday 09 April 2026 07:40:13 +0000 (0:00:02.302) 
0:00:16.059 ******** 2026-04-09 07:40:17.792835 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:40:17.792846 | orchestrator | 2026-04-09 07:40:17.792857 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-09 07:40:17.792868 | orchestrator | Thursday 09 April 2026 07:40:15 +0000 (0:00:01.929) 0:00:17.988 ******** 2026-04-09 07:40:17.792879 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:17.792892 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:17.792919 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:17.792931 | orchestrator | 2026-04-09 07:40:17.792942 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-09 07:40:17.792953 | orchestrator | Thursday 09 April 2026 07:40:17 +0000 (0:00:02.201) 0:00:20.190 ******** 2026-04-09 07:40:17.792965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 
07:40:17.792976 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:40:17.792998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:40:24.526548 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:40:24.526664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:40:24.526765 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:40:24.526781 | orchestrator | 2026-04-09 07:40:24.526794 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 
2026-04-09 07:40:24.526806 | orchestrator | Thursday 09 April 2026 07:40:18 +0000 (0:00:01.430) 0:00:21.620 ******** 2026-04-09 07:40:24.526833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:40:24.526846 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:40:24.526858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:40:24.526869 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:40:24.526881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:40:24.526892 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:40:24.526903 | orchestrator | 2026-04-09 07:40:24.526915 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-09 07:40:24.526926 | orchestrator | Thursday 09 April 2026 07:40:20 +0000 (0:00:01.715) 0:00:23.336 ******** 2026-04-09 07:40:24.526957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:24.526979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:24.526996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:24.527008 | orchestrator | 2026-04-09 07:40:24.527019 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-09 07:40:24.527030 | orchestrator | Thursday 09 April 2026 07:40:23 +0000 (0:00:02.391) 0:00:25.728 ******** 2026-04-09 07:40:24.527042 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:24.527055 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:24.527076 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:40:50.851399 | orchestrator | 2026-04-09 07:40:50.851542 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-09 07:40:50.851564 | orchestrator | Thursday 09 April 2026 07:40:25 +0000 (0:00:02.574) 0:00:28.302 ******** 2026-04-09 07:40:50.851577 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:40:50.851590 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:40:50.851601 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:40:50.851613 | orchestrator | 2026-04-09 07:40:50.851624 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-09 07:40:50.851636 | orchestrator | Thursday 09 April 2026 07:40:26 +0000 (0:00:01.367) 0:00:29.669 ******** 2026-04-09 07:40:50.851647 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 07:40:50.851658 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 07:40:50.851732 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 07:40:50.851743 | orchestrator | 2026-04-09 07:40:50.851755 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-09 07:40:50.851766 | orchestrator | Thursday 09 April 2026 07:40:29 +0000 (0:00:02.220) 0:00:31.890 ******** 2026-04-09 07:40:50.851779 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 07:40:50.851799 | orchestrator | ok: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-09 07:40:50.851817 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-09 07:40:50.851835 | orchestrator |
2026-04-09 07:40:50.851852 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-04-09 07:40:50.851870 | orchestrator | Thursday 09 April 2026 07:40:31 +0000 (0:00:02.217) 0:00:34.107 ********
2026-04-09 07:40:50.851888 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 07:40:50.851906 | orchestrator |
2026-04-09 07:40:50.851945 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-04-09 07:40:50.851963 | orchestrator | Thursday 09 April 2026 07:40:33 +0000 (0:00:01.852) 0:00:35.959 ********
2026-04-09 07:40:50.851981 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:40:50.852002 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:40:50.852021 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:40:50.852039 | orchestrator |
2026-04-09 07:40:50.852059 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-09 07:40:50.852080 | orchestrator | Thursday 09 April 2026 07:40:35 +0000 (0:00:02.006) 0:00:37.966 ********
2026-04-09 07:40:50.852098 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:40:50.852118 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:40:50.852139 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:40:50.852156 | orchestrator |
2026-04-09 07:40:50.852175 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-04-09 07:40:50.852192 | orchestrator | Thursday 09 April 2026 07:40:37 +0000 (0:00:02.626) 0:00:40.592 ********
2026-04-09 07:40:50.852216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:40:50.852266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:40:50.852314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:40:50.852337 | orchestrator |
2026-04-09 07:40:50.852355 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-04-09 07:40:50.852373 | orchestrator | Thursday 09 April 2026 07:40:40 +0000 (0:00:02.292) 0:00:42.885 ********
2026-04-09 07:40:50.852393 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 07:40:50.852414 | orchestrator |     "msg": "Notifying handlers"
2026-04-09 07:40:50.852432 | orchestrator | }
2026-04-09 07:40:50.852452 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 07:40:50.852470 | orchestrator |     "msg": "Notifying handlers"
2026-04-09 07:40:50.852490 | orchestrator | }
2026-04-09 07:40:50.852509 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 07:40:50.852526 | orchestrator |     "msg": "Notifying handlers"
2026-04-09 07:40:50.852544 | orchestrator | }
2026-04-09 07:40:50.852556 | orchestrator |
2026-04-09 07:40:50.852566 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 07:40:50.852577 | orchestrator | Thursday 09 April 2026 07:40:41 +0000 (0:00:01.402) 0:00:44.288 ********
2026-04-09 07:40:50.852597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:40:50.852609 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:40:50.852621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:40:50.852644 | orchestrator | skipping: [testbed-node-1]
2026-04-09 07:40:50.852656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:40:50.852693 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:40:50.852705 | orchestrator |
2026-04-09 07:40:50.852716 | orchestrator | TASK [grafana : Stopping all Grafana instances but the first node] *************
2026-04-09 07:40:50.852727 | orchestrator | Thursday 09 April 2026 07:40:43 +0000 (0:00:01.482) 0:00:45.771 ********
2026-04-09 07:40:50.852738 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:40:50.852749 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:40:50.852760 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:40:50.852770 | orchestrator |
2026-04-09 07:40:50.852781 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-09 07:40:50.852792 | orchestrator | Thursday 09 April 2026 07:40:50 +0000 (0:00:07.052) 0:00:52.823 ********
2026-04-09 07:40:50.852804 | orchestrator |
2026-04-09 07:40:50.852814 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-09 07:40:50.852825 | orchestrator | Thursday 09 April 2026 07:40:50 +0000 (0:00:00.450) 0:00:53.274 ********
2026-04-09 07:40:50.852836 | orchestrator |
2026-04-09 07:40:50.852858 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-09 07:42:34.575356 | orchestrator | Thursday 09 April 2026 07:40:51 +0000 (0:00:00.590) 0:00:53.865 ********
2026-04-09 07:42:34.575501 | orchestrator |
2026-04-09 07:42:34.575526 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-09 07:42:34.575546 | orchestrator | Thursday 09 April 2026 07:40:51 +0000 (0:00:00.790) 0:00:54.656 ********
2026-04-09 07:42:34.575565 | orchestrator | skipping: [testbed-node-1]
2026-04-09 07:42:34.575585 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:42:34.575661 | orchestrator | changed: [testbed-node-0]
2026-04-09 07:42:34.575681 | orchestrator |
2026-04-09 07:42:34.575700 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-09 07:42:34.575718 | orchestrator | Thursday 09 April 2026 07:41:31 +0000 (0:00:39.478) 0:01:34.134 ********
2026-04-09 07:42:34.575737 | orchestrator | skipping: [testbed-node-1]
2026-04-09 07:42:34.575755 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:42:34.575773 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-09 07:42:34.575793 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-04-09 07:42:34.575811 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:42:34.575830 | orchestrator |
2026-04-09 07:42:34.575848 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-09 07:42:34.575866 | orchestrator | Thursday 09 April 2026 07:41:59 +0000 (0:00:27.632) 0:02:01.766 ********
2026-04-09 07:42:34.575918 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:42:34.575938 | orchestrator | changed: [testbed-node-2]
2026-04-09 07:42:34.575956 | orchestrator | changed: [testbed-node-1]
2026-04-09 07:42:34.575975 | orchestrator |
2026-04-09 07:42:34.575994 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 07:42:34.576013 | orchestrator | testbed-node-0 : ok=19  changed=6  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 07:42:34.576052 | orchestrator | testbed-node-1 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 07:42:34.576071 | orchestrator | testbed-node-2 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 07:42:34.576090 | orchestrator |
2026-04-09 07:42:34.576108 | orchestrator |
2026-04-09 07:42:34.576126 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 07:42:34.576145 | orchestrator | Thursday 09 April 2026 07:42:34 +0000 (0:00:35.153) 0:02:36.920 ********
2026-04-09 07:42:34.576163 | orchestrator | ===============================================================================
2026-04-09 07:42:34.576182 | orchestrator | grafana : Restart first grafana container ------------------------------ 39.48s
2026-04-09 07:42:34.576200 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 35.15s
2026-04-09 07:42:34.576219 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.63s
2026-04-09 07:42:34.576237 | orchestrator | grafana : Stopping all Grafana instances but the first node ------------- 7.05s
2026-04-09 07:42:34.576256 | orchestrator | grafana : Checking if Grafana container needs upgrading ----------------- 3.01s
2026-04-09 07:42:34.576275 | orchestrator | grafana : include_tasks ------------------------------------------------- 2.95s
2026-04-09 07:42:34.576293 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 2.63s
2026-04-09 07:42:34.576311 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 2.57s
2026-04-09 07:42:34.576330 | orchestrator | grafana : Copying over config.json files -------------------------------- 2.39s
2026-04-09 07:42:34.576348 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 2.30s
2026-04-09 07:42:34.576367 | orchestrator | service-check-containers : grafana | Check containers ------------------- 2.29s
2026-04-09 07:42:34.576385 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.28s
2026-04-09 07:42:34.576403 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 2.22s
2026-04-09 07:42:34.576421 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 2.22s
2026-04-09 07:42:34.576439 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.20s
2026-04-09 07:42:34.576457 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 2.09s
2026-04-09 07:42:34.576476 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 2.01s
2026-04-09 07:42:34.576494 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.93s
2026-04-09 07:42:34.576512 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 1.85s
2026-04-09 07:42:34.576531 | orchestrator | grafana : Flush handlers ------------------------------------------------ 1.83s
2026-04-09 07:42:34.775348 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/510-clusterapi.sh
2026-04-09 07:42:34.782462 | orchestrator | + set -e
2026-04-09 07:42:34.782525 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 07:42:34.782541 | orchestrator | ++ export INTERACTIVE=false
2026-04-09 07:42:34.782553 | orchestrator | ++ INTERACTIVE=false
2026-04-09 07:42:34.782564 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 07:42:34.782575 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 07:42:34.782587 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-09 07:42:34.783204 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-09 07:42:34.790182 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-09 07:42:34.790233 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-09 07:42:34.790804 | orchestrator | ++ semver 10.0.0 8.0.0
2026-04-09 07:42:34.839569 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 07:42:34.839680 | orchestrator | + osism apply clusterapi
2026-04-09 07:42:36.193266 | orchestrator | 2026-04-09 07:42:36 | INFO  | Prepare task for execution of clusterapi.
2026-04-09 07:42:36.261186 | orchestrator | 2026-04-09 07:42:36 | INFO  | Task 7401dad2-d0ba-4865-8db6-05c9b3381c97 (clusterapi) was prepared for execution.
2026-04-09 07:42:36.261310 | orchestrator | 2026-04-09 07:42:36 | INFO  | It takes a moment until task 7401dad2-d0ba-4865-8db6-05c9b3381c97 (clusterapi) has been started and output is visible here.
2026-04-09 07:43:28.977430 | orchestrator |
2026-04-09 07:43:28.977700 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-04-09 07:43:28.977721 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-09 07:43:28.977734 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-09 07:43:28.977757 | orchestrator |
2026-04-09 07:43:28.977768 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-04-09 07:43:28.977779 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-09 07:43:28.977790 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-09 07:43:28.977812 | orchestrator | Thursday 09 April 2026 07:42:40 +0000 (0:00:01.132) 0:00:01.132 ********
2026-04-09 07:43:28.977824 | orchestrator | included: cert_manager for testbed-manager
2026-04-09 07:43:28.977836 | orchestrator |
2026-04-09 07:43:28.977847 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-04-09 07:43:28.977858 | orchestrator | Thursday 09 April 2026 07:42:41 +0000 (0:00:00.770) 0:00:01.902 ********
2026-04-09 07:43:28.977870 | orchestrator | ok: [testbed-manager]
2026-04-09 07:43:28.977881 | orchestrator |
2026-04-09 07:43:28.977892 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-04-09 07:43:28.977921 | orchestrator | Thursday 09 April 2026 07:42:45 +0000 (0:00:03.556) 0:00:05.458 ********
2026-04-09 07:43:28.977933 | orchestrator | ok: [testbed-manager]
2026-04-09 07:43:28.977944 | orchestrator |
2026-04-09 07:43:28.977955 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-04-09 07:43:28.977967 | orchestrator |
2026-04-09 07:43:28.977980 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-04-09 07:43:28.977993 | orchestrator | Thursday 09 April 2026 07:42:49 +0000 (0:00:03.959) 0:00:09.417 ********
2026-04-09 07:43:28.978005 | orchestrator | ok: [testbed-manager]
2026-04-09 07:43:28.978080 | orchestrator |
2026-04-09 07:43:28.978098 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-04-09 07:43:28.978111 | orchestrator | Thursday 09 April 2026 07:42:50 +0000 (0:00:01.198) 0:00:10.616 ********
2026-04-09 07:43:28.978123 | orchestrator | ok: [testbed-manager]
2026-04-09 07:43:28.978136 | orchestrator |
2026-04-09 07:43:28.978150 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-04-09 07:43:28.978198 | orchestrator | Thursday 09 April 2026 07:42:50 +0000 (0:00:00.283) 0:00:10.900 ********
2026-04-09 07:43:28.978213 | orchestrator | skipping: [testbed-manager]
2026-04-09 07:43:28.978227 | orchestrator |
2026-04-09 07:43:28.978239 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-04-09 07:43:28.978252 | orchestrator | Thursday 09 April 2026 07:42:50 +0000 (0:00:00.146) 0:00:11.047 ********
2026-04-09 07:43:28.978265 | orchestrator | ok: [testbed-manager]
2026-04-09 07:43:28.978278 | orchestrator |
2026-04-09 07:43:28.978291 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-04-09 07:43:28.978327 | orchestrator | Thursday 09 April 2026 07:43:25 +0000 (0:00:35.050) 0:00:46.097 ********
2026-04-09 07:43:28.978339 | orchestrator | changed: [testbed-manager]
2026-04-09 07:43:28.978351 | orchestrator |
2026-04-09 07:43:28.978362 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 07:43:28.978374 | orchestrator | testbed-manager : ok=7  changed=1  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 07:43:28.978386 | orchestrator |
2026-04-09 07:43:28.978397 | orchestrator |
2026-04-09 07:43:28.978408 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 07:43:28.978419 | orchestrator | Thursday 09 April 2026 07:43:28 +0000 (0:00:02.786) 0:00:48.883 ********
2026-04-09 07:43:28.978430 | orchestrator | ===============================================================================
2026-04-09 07:43:28.978441 | orchestrator | Upgrade the CAPI management cluster ------------------------------------ 35.05s
2026-04-09 07:43:28.978452 | orchestrator | cert_manager : Deploy cert-manager -------------------------------------- 3.96s
2026-04-09 07:43:28.978462 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 3.56s
2026-04-09 07:43:28.978473 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.79s
2026-04-09 07:43:28.978484 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.20s
2026-04-09 07:43:28.978495 | orchestrator | Include cert_manager role ----------------------------------------------- 0.77s
2026-04-09 07:43:28.978506 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.28s
2026-04-09 07:43:28.978517 | orchestrator | Initialize the CAPI management cluster ---------------------------------- 0.15s
2026-04-09 07:43:29.184049 | orchestrator | + osism apply -a upgrade magnum
2026-04-09 07:43:30.493472 | orchestrator | 2026-04-09 07:43:30 | INFO  | Prepare task for execution of magnum.
2026-04-09 07:43:30.557131 | orchestrator | 2026-04-09 07:43:30 | INFO  | Task c8266ebc-0d2e-4a96-8f98-2b885540fe5a (magnum) was prepared for execution.
2026-04-09 07:43:30.557196 | orchestrator | 2026-04-09 07:43:30 | INFO  | It takes a moment until task c8266ebc-0d2e-4a96-8f98-2b885540fe5a (magnum) has been started and output is visible here.
2026-04-09 07:43:43.179157 | orchestrator |
2026-04-09 07:43:43.179275 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 07:43:43.179294 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-09 07:43:43.179307 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-09 07:43:43.179330 | orchestrator |
2026-04-09 07:43:43.179342 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 07:43:43.179353 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-09 07:43:43.179364 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-09 07:43:43.179386 | orchestrator | Thursday 09 April 2026 07:43:35 +0000 (0:00:01.108) 0:00:01.108 ********
2026-04-09 07:43:43.179397 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:43:43.179409 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:43:43.179420 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:43:43.179431 | orchestrator |
2026-04-09 07:43:43.179442 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 07:43:43.179453 | orchestrator | Thursday 09 April 2026 07:43:36 +0000 (0:00:00.941) 0:00:02.050 ********
2026-04-09 07:43:43.179463 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-09 07:43:43.179476 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-09 07:43:43.179487 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-09 07:43:43.179497 | orchestrator |
2026-04-09 07:43:43.179529 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-09 07:43:43.179606 | orchestrator |
2026-04-09 07:43:43.179619 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-09 07:43:43.179646 | orchestrator | Thursday 09 April 2026 07:43:36 +0000 (0:00:00.798) 0:00:02.849 ********
2026-04-09 07:43:43.179657 | orchestrator | included: /ansible/roles/magnum/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 07:43:43.179669 | orchestrator |
2026-04-09 07:43:43.179680 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-04-09 07:43:43.179690 | orchestrator | Thursday 09 April 2026 07:43:38 +0000 (0:00:01.317) 0:00:04.167 ********
2026-04-09 07:43:43.179708 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:43:43.179724 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:43:43.179755 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:43:43.179773 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-09 07:43:43.179796 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-09 07:43:43.179808 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-09 07:43:43.179820 | orchestrator |
2026-04-09 07:43:43.179831 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-09 07:43:43.179842 | orchestrator | Thursday 09 April 2026 07:43:40 +0000 (0:00:02.020) 0:00:06.187 ********
2026-04-09 07:43:43.179853 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:43:43.179864 | orchestrator |
2026-04-09 07:43:43.179876 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-09 07:43:43.179886 | orchestrator | Thursday 09 April 2026 07:43:40 +0000 (0:00:00.134) 0:00:06.322 ********
2026-04-09 07:43:43.179897 | orchestrator | skipping: [testbed-node-0]
2026-04-09 07:43:43.179908 | orchestrator | skipping: [testbed-node-1]
2026-04-09 07:43:43.179919 | orchestrator | skipping: [testbed-node-2]
2026-04-09 07:43:43.179930 | orchestrator |
2026-04-09 07:43:43.179941 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-09 07:43:43.179952 | orchestrator | Thursday 09 April 2026 07:43:40 +0000 (0:00:00.336) 0:00:06.658 ********
2026-04-09 07:43:43.179963 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 07:43:43.179973 | orchestrator |
2026-04-09 07:43:43.179984 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-04-09 07:43:43.179995 | orchestrator | Thursday 09 April 2026 07:43:41 +0000 (0:00:01.142) 0:00:07.800 ********
2026-04-09 07:43:43.180016 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:43:47.921522 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:43:47.921691 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:43:47.921711 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-09 07:43:47.921725 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-09 07:43:47.921789 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-09 07:43:47.921806 | orchestrator |
2026-04-09 07:43:47.921819 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-04-09 07:43:47.921833 | orchestrator | Thursday 09 April 2026 07:43:44 +0000 (0:00:02.573) 0:00:10.373 ********
2026-04-09 07:43:47.921852 | orchestrator | ok: [testbed-node-0]
2026-04-09 07:43:47.921879 | orchestrator | ok: [testbed-node-1]
2026-04-09 07:43:47.921903 | orchestrator | ok: [testbed-node-2]
2026-04-09 07:43:47.921919 | orchestrator |
2026-04-09 07:43:47.921938 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-09 07:43:47.921966 | orchestrator | Thursday 09 April 2026 07:43:44 +0000 (0:00:00.326) 0:00:10.700 ********
2026-04-09 07:43:47.921985 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 07:43:47.922002 | orchestrator |
2026-04-09 07:43:47.922100 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-04-09 07:43:47.922128 | orchestrator | Thursday 09 April 2026 07:43:45 +0000 (0:00:01.142) 0:00:11.843 ********
2026-04-09 07:43:47.922150 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:43:47.922174 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:43:47.922195 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 07:43:47.922257 | orchestrator | ok: [testbed-node-0] =>
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:43:49.327405 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:43:49.327520 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:43:49.327610 | orchestrator | 2026-04-09 07:43:49.327626 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-09 07:43:49.327640 | orchestrator | Thursday 09 April 2026 07:43:48 +0000 (0:00:02.277) 0:00:14.121 ******** 2026-04-09 07:43:49.327657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:43:49.327695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:43:49.327708 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:43:49.327756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:43:49.327771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:43:49.327784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:43:49.327805 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:43:49.327817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:43:49.327828 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:43:49.327840 | orchestrator | 2026-04-09 07:43:49.327851 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-09 07:43:49.327862 | orchestrator | Thursday 09 April 2026 07:43:48 +0000 (0:00:00.773) 0:00:14.895 ******** 2026-04-09 07:43:49.327886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:43:52.557502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:43:52.557753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:43:52.557821 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:43:52.557843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:43:52.557861 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:43:52.557880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:43:52.557943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:43:52.557962 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:43:52.557980 | orchestrator | 2026-04-09 07:43:52.557998 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-09 07:43:52.558131 | orchestrator | Thursday 09 April 2026 07:43:50 +0000 (0:00:01.375) 0:00:16.271 ******** 2026-04-09 07:43:52.558158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:43:52.558194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:43:52.558214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:43:52.558255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:43:58.633700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:43:58.633864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:43:58.633883 | orchestrator | 2026-04-09 07:43:58.633897 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-09 07:43:58.633910 | orchestrator | Thursday 09 April 2026 07:43:52 +0000 (0:00:02.440) 0:00:18.711 ******** 2026-04-09 07:43:58.633925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:43:58.633954 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:43:58.633988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:43:58.634011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:43:58.634094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:43:58.634106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:43:58.634118 | orchestrator | 2026-04-09 07:43:58.634129 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-09 07:43:58.634141 | orchestrator | Thursday 09 April 2026 07:43:58 +0000 (0:00:05.461) 0:00:24.172 ******** 2026-04-09 07:43:58.634171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:44:01.993622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:44:01.993792 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:44:01.993821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:44:01.993835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:44:01.993847 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:44:01.993875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:44:01.993941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:44:01.993966 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:44:01.993978 | orchestrator | 2026-04-09 07:44:01.993990 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-04-09 07:44:01.994004 | orchestrator | Thursday 09 April 2026 07:43:59 +0000 (0:00:01.414) 0:00:25.587 ******** 2026-04-09 07:44:01.994076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-04-09 07:44:01.994096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:44:01.994118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 07:44:01.994145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:44:26.870595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:44:26.870711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 07:44:26.870725 | orchestrator | 2026-04-09 07:44:26.870735 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-04-09 07:44:26.870745 | orchestrator | Thursday 09 April 2026 07:44:02 +0000 (0:00:02.703) 0:00:28.290 ******** 2026-04-09 07:44:26.870756 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 07:44:26.870766 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:44:26.870774 | orchestrator | } 2026-04-09 07:44:26.870783 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 07:44:26.870792 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:44:26.870800 | orchestrator | } 2026-04-09 07:44:26.870808 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 07:44:26.870817 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 07:44:26.870826 | orchestrator | } 2026-04-09 07:44:26.870834 | orchestrator | 2026-04-09 07:44:26.870843 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 07:44:26.870851 | orchestrator | Thursday 09 April 2026 07:44:02 +0000 (0:00:00.369) 0:00:28.660 ******** 2026-04-09 07:44:26.870881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:44:26.870915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:44:26.870925 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:44:26.870953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:44:26.870963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:44:26.870972 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:44:26.870981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 07:44:26.870996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 07:44:26.871013 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:44:26.871022 | orchestrator | 2026-04-09 07:44:26.871030 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-09 07:44:26.871039 | orchestrator | Thursday 09 April 2026 07:44:04 +0000 
(0:00:01.385) 0:00:30.046 ******** 2026-04-09 07:44:26.871048 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:44:26.871056 | orchestrator | 2026-04-09 07:44:26.871066 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-09 07:44:26.871075 | orchestrator | Thursday 09 April 2026 07:44:26 +0000 (0:00:22.759) 0:00:52.805 ******** 2026-04-09 07:44:26.871084 | orchestrator | 2026-04-09 07:44:26.871093 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-09 07:44:26.871109 | orchestrator | Thursday 09 April 2026 07:44:26 +0000 (0:00:00.083) 0:00:52.889 ******** 2026-04-09 07:45:23.548126 | orchestrator | 2026-04-09 07:45:23.548271 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-09 07:45:23.548300 | orchestrator | Thursday 09 April 2026 07:44:26 +0000 (0:00:00.075) 0:00:52.964 ******** 2026-04-09 07:45:23.548320 | orchestrator | 2026-04-09 07:45:23.548340 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-09 07:45:23.548352 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-09 07:45:23.548363 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-09 07:45:23.548386 | orchestrator | Thursday 09 April 2026 07:44:27 +0000 (0:00:00.076) 0:00:53.041 ******** 2026-04-09 07:45:23.548397 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:45:23.548409 | orchestrator | changed: [testbed-node-2] 2026-04-09 07:45:23.548420 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:45:23.548431 | orchestrator | 2026-04-09 07:45:23.548442 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-09 07:45:23.548453 | orchestrator | Thursday 09 April 2026 07:44:49 +0000 (0:00:22.398) 0:01:15.440 ******** 2026-04-09 07:45:23.548520 | 
orchestrator | changed: [testbed-node-2] 2026-04-09 07:45:23.548533 | orchestrator | changed: [testbed-node-1] 2026-04-09 07:45:23.548544 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:45:23.548555 | orchestrator | 2026-04-09 07:45:23.548566 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:45:23.548579 | orchestrator | testbed-node-0 : ok=16  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 07:45:23.548592 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 07:45:23.548603 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 07:45:23.548614 | orchestrator | 2026-04-09 07:45:23.548625 | orchestrator | 2026-04-09 07:45:23.548636 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:45:23.548648 | orchestrator | Thursday 09 April 2026 07:45:23 +0000 (0:00:33.857) 0:01:49.298 ******** 2026-04-09 07:45:23.548660 | orchestrator | =============================================================================== 2026-04-09 07:45:23.548704 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 33.86s 2026-04-09 07:45:23.548719 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 22.76s 2026-04-09 07:45:23.548731 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 22.40s 2026-04-09 07:45:23.548745 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.46s 2026-04-09 07:45:23.548757 | orchestrator | service-check-containers : magnum | Check containers -------------------- 2.70s 2026-04-09 07:45:23.548770 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.57s 2026-04-09 07:45:23.548783 | orchestrator | magnum : 
Copying over config.json files for services -------------------- 2.44s 2026-04-09 07:45:23.548795 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.28s 2026-04-09 07:45:23.548808 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.02s 2026-04-09 07:45:23.548821 | orchestrator | magnum : Copying over existing policy file ------------------------------ 1.41s 2026-04-09 07:45:23.548834 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.39s 2026-04-09 07:45:23.548846 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.38s 2026-04-09 07:45:23.548858 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.32s 2026-04-09 07:45:23.548871 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.14s 2026-04-09 07:45:23.548884 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 1.14s 2026-04-09 07:45:23.548896 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s 2026-04-09 07:45:23.548924 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s 2026-04-09 07:45:23.548938 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 0.77s 2026-04-09 07:45:23.548950 | orchestrator | service-check-containers : magnum | Notify handlers to restart containers --- 0.37s 2026-04-09 07:45:23.548964 | orchestrator | magnum : Set magnum policy file ----------------------------------------- 0.34s 2026-04-09 07:45:24.332594 | orchestrator | ok: Runtime: 3:26:40.674044 2026-04-09 07:45:24.787758 | 2026-04-09 07:45:24.787964 | TASK [Bootstrap services] 2026-04-09 07:45:25.326616 | orchestrator | skipping: Conditional result was False 2026-04-09 07:45:25.355249 | 2026-04-09 07:45:25.355412 | TASK [Run 
checks after the upgrade] 2026-04-09 07:45:26.110192 | orchestrator | + set -e 2026-04-09 07:45:26.110494 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 07:45:26.110536 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 07:45:26.110570 | orchestrator | ++ INTERACTIVE=false 2026-04-09 07:45:26.110592 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 07:45:26.110613 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 07:45:26.110654 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-09 07:45:26.111806 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-09 07:45:26.117757 | orchestrator | 2026-04-09 07:45:26.117828 | orchestrator | # CHECK 2026-04-09 07:45:26.117843 | orchestrator | 2026-04-09 07:45:26.117857 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-09 07:45:26.117874 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-09 07:45:26.117886 | orchestrator | + echo 2026-04-09 07:45:26.117898 | orchestrator | + echo '# CHECK' 2026-04-09 07:45:26.117909 | orchestrator | + echo 2026-04-09 07:45:26.117924 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-09 07:45:26.118650 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-09 07:45:26.183719 | orchestrator | 2026-04-09 07:45:26.183820 | orchestrator | ## Containers @ testbed-manager 2026-04-09 07:45:26.183836 | orchestrator | 2026-04-09 07:45:26.183850 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-09 07:45:26.183862 | orchestrator | + echo 2026-04-09 07:45:26.183874 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-09 07:45:26.183886 | orchestrator | + echo 2026-04-09 07:45:26.183897 | orchestrator | + osism container testbed-manager ps 2026-04-09 07:45:27.652006 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-09 07:45:27.652137 | orchestrator | 88a89efa92a3 
registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328 "dumb-init --single-…" 5 minutes ago Up 5 minutes prometheus_blackbox_exporter 2026-04-09 07:45:27.652162 | orchestrator | f497f37500f7 registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_alertmanager 2026-04-09 07:45:27.652175 | orchestrator | 02bf44b3089d registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-09 07:45:27.652204 | orchestrator | bac53f2c35f0 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-09 07:45:27.652227 | orchestrator | e31b1993c984 registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_server 2026-04-09 07:45:27.652239 | orchestrator | 976ad4dc5894 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron 2026-04-09 07:45:27.652256 | orchestrator | e42e3a747fdb registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox 2026-04-09 07:45:27.652268 | orchestrator | 40e0de3e190f registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-09 07:45:27.652304 | orchestrator | df49d8d159dc registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 3 hours ago Up 3 hours openstackclient 2026-04-09 07:45:27.652317 | orchestrator | a3454373a043 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" 3 hours ago Up 3 hours (healthy) manager-inventory_reconciler-1 2026-04-09 07:45:27.652328 | orchestrator | b0e9da6dc445 
registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-ansible 2026-04-09 07:45:27.652340 | orchestrator | 6227ba06136b registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) kolla-ansible 2026-04-09 07:45:27.652351 | orchestrator | bbfc7ad449dc registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-kubernetes 2026-04-09 07:45:27.652387 | orchestrator | 3834cd8bb854 registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) ceph-ansible 2026-04-09 07:45:27.652400 | orchestrator | 37980de54ae7 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" 3 hours ago Up 3 hours (healthy) osismclient 2026-04-09 07:45:27.652412 | orchestrator | c583a3b96646 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-openstack-1 2026-04-09 07:45:27.652423 | orchestrator | 8ac0dcb1408c registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up About an hour (healthy) manager-listener-1 2026-04-09 07:45:27.652435 | orchestrator | 1467962aef51 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-flower-1 2026-04-09 07:45:27.652446 | orchestrator | 270c8abde6a1 registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" 3 hours ago Up 3 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-09 07:45:27.652481 | orchestrator | 84526e0b9781 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-beat-1 2026-04-09 07:45:27.652492 | orchestrator | 45cee22e422b registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-09 07:45:27.652503 | orchestrator | 
5340c79f7457 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 5 hours ago Up 5 hours cephclient 2026-04-09 07:45:27.652523 | orchestrator | 206db9928f06 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 5 hours ago Up 5 hours (healthy) 80/tcp phpmyadmin 2026-04-09 07:45:27.652535 | orchestrator | b4500a1bc2c9 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 5 hours ago Up 5 hours (healthy) 8080/tcp homer 2026-04-09 07:45:27.652546 | orchestrator | 394158c29096 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 5 hours ago Up 5 hours 80/tcp cgit 2026-04-09 07:45:27.652557 | orchestrator | 9a9dccf2d25e registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 6 hours ago Up 6 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-09 07:45:27.652568 | orchestrator | 558dff013697 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 6 hours ago Up 3 hours (healthy) 8000/tcp manager-ara-server-1 2026-04-09 07:45:27.652585 | orchestrator | a4326919cc32 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 6 hours ago Up 3 hours (healthy) 6379/tcp manager-redis-1 2026-04-09 07:45:27.652596 | orchestrator | 4b9901ad3bd7 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 6 hours ago Up 3 hours (healthy) 3306/tcp manager-mariadb-1 2026-04-09 07:45:27.652614 | orchestrator | 4ff1970252fa registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 6 hours ago Up 6 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-09 07:45:27.809381 | orchestrator | 2026-04-09 07:45:27.809514 | orchestrator | ## Images @ testbed-manager 2026-04-09 07:45:27.809532 | orchestrator | 2026-04-09 07:45:27.809545 | orchestrator | + echo 2026-04-09 07:45:27.809557 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-09 07:45:27.809569 | orchestrator | + echo 2026-04-09 07:45:27.809580 | 
orchestrator | + osism container testbed-manager images
2026-04-09 07:45:29.303850 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 07:45:29.304010 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 c816b5cd7a47 4 hours ago 212MB
2026-04-09 07:45:29.304027 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 7cc0762d03ae 28 hours ago 239MB
2026-04-09 07:45:29.304042 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20260328.0 38f6ca42e9a0 9 days ago 635MB
2026-04-09 07:45:29.304053 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 11 days ago 590MB
2026-04-09 07:45:29.304064 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 11 days ago 683MB
2026-04-09 07:45:29.304075 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 11 days ago 277MB
2026-04-09 07:45:29.304085 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter 0.25.0.20260328 1bf017fd7bf3 11 days ago 319MB
2026-04-09 07:45:29.304143 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager 0.28.1.20260328 d1986023a383 11 days ago 415MB
2026-04-09 07:45:29.304163 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 11 days ago 368MB
2026-04-09 07:45:29.304182 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-server 3.2.1.20260328 4f5732d5eb69 11 days ago 860MB
2026-04-09 07:45:29.304200 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 11 days ago 317MB
2026-04-09 07:45:29.304220 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20260322.0 3e18c5de9bc5 2 weeks ago 634MB
2026-04-09 07:45:29.304243 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20260322.0 c68c1f5728ae 2 weeks ago 1.24GB
2026-04-09 07:45:29.304261 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20260322.0 f6e7e0d58bb1 2 weeks ago 585MB
2026-04-09 07:45:29.304280 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20260322.0 9806642932fd 2 weeks ago 357MB
2026-04-09 07:45:29.304295 | orchestrator | registry.osism.tech/osism/osism 0.20260320.0 5d0420989a40 2 weeks ago 408MB
2026-04-09 07:45:29.304307 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20260320.0 80b833af5991 2 weeks ago 232MB
2026-04-09 07:45:29.304318 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-09 07:45:29.304346 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-09 07:45:29.304365 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB
2026-04-09 07:45:29.304385 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-09 07:45:29.304405 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-09 07:45:29.304424 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-09 07:45:29.304497 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB
2026-04-09 07:45:29.304520 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-09 07:45:29.304538 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB
2026-04-09 07:45:29.304558 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB
2026-04-09 07:45:29.304571 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-09 07:45:29.304582 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB
2026-04-09 07:45:29.304593 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB
2026-04-09 07:45:29.304604 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB
2026-04-09 07:45:29.304644 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB
2026-04-09 07:45:29.304656 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB
2026-04-09 07:45:29.304667 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB
2026-04-09 07:45:29.304690 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-09 07:45:29.304701 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-09 07:45:29.304712 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-09 07:45:29.304722 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-09 07:45:29.304733 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB
2026-04-09 07:45:29.304744 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-09 07:45:29.304755 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-04-09 07:45:29.461569 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 07:45:29.462226 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-09 07:45:29.507249 | orchestrator |
2026-04-09 07:45:29.507350 | orchestrator | ## Containers @ testbed-node-0
2026-04-09 07:45:29.507366 | orchestrator |
2026-04-09 07:45:29.507377 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-09 07:45:29.507389 | orchestrator | + echo
2026-04-09 07:45:29.507401 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-09 07:45:29.507413 | orchestrator | + echo
2026-04-09 07:45:29.507424 | orchestrator | + osism container testbed-node-0 ps
2026-04-09 07:45:30.998554 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-09 07:45:30.999290 | orchestrator | 51207dde872d registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 9 seconds ago Up 8 seconds (health: starting) magnum_conductor
2026-04-09 07:45:30.999378 | orchestrator | 17421c78ab3e registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 54 seconds ago Up 53 seconds (healthy) magnum_api
2026-04-09 07:45:30.999399 | orchestrator | b9fa68dd8ae4 registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 4 minutes ago Up 4 minutes grafana
2026-04-09 07:45:30.999435 | orchestrator | 0df3b9a2d9d6 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter
2026-04-09 07:45:30.999492 | orchestrator | 52eb7ec345ef registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor
2026-04-09 07:45:30.999513 | orchestrator | df3e7bf3a9fb registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_memcached_exporter
2026-04-09 07:45:30.999532 | orchestrator | 78a7b901e94b registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter
2026-04-09 07:45:30.999551 | orchestrator | f05eddc311af registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-09 07:45:30.999568 | orchestrator | 3a6b6d65cff7 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share
2026-04-09 07:45:30.999611 | orchestrator | e73d08a5e232 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-04-09 07:45:30.999637 | orchestrator | 2587319d248c registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-04-09 07:45:30.999657 | orchestrator | 7e04dae667dc registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-04-09 07:45:30.999673 | orchestrator | f8d55d7615bf registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_worker
2026-04-09 07:45:30.999690 | orchestrator | 3dbf39b9babc registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_housekeeping
2026-04-09 07:45:30.999706 | orchestrator | dcf38ae81820 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_health_manager
2026-04-09 07:45:30.999724 | orchestrator | 126945d27051 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes octavia_driver_agent
2026-04-09 07:45:30.999741 | orchestrator | 4ccc108a876b registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_api
2026-04-09 07:45:30.999784 | orchestrator | 8b0b110e3e79 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_notifier
2026-04-09 07:45:30.999801 | orchestrator | 9f4bf7c0ba6a registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_listener
2026-04-09 07:45:30.999818 | orchestrator | 779eb3d24045 registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_evaluator
2026-04-09 07:45:30.999835 | orchestrator | c4e3fa5e15ba registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_api
2026-04-09 07:45:30.999853 | orchestrator | 2b009c129110 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes ceilometer_central
2026-04-09 07:45:30.999870 | orchestrator | ce684e1a63ec registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) ceilometer_notification
2026-04-09 07:45:30.999887 | orchestrator | 69e46f2b4584 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-04-09 07:45:30.999913 | orchestrator | 11fa06258a36 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-04-09 07:45:30.999933 | orchestrator | 5dbd782b00a6 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-04-09 07:45:30.999966 | orchestrator | 128dbec236ed registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central
2026-04-09 07:45:30.999986 | orchestrator | c992f2a2aae1 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api
2026-04-09 07:45:31 | orchestrator | b7eb4248d931 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-04-09 07:45:31.000012 | orchestrator | 9aaad4d37b03 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-04-09 07:45:31.000023 | orchestrator | 3fb424c62693 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-04-09 07:45:31.000034 | orchestrator | 2c496fb2f514 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-09 07:45:31.000045 | orchestrator | 937f3a13844f registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) cinder_backup
2026-04-09 07:45:31.000056 | orchestrator | 78e6fba2bea9 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) cinder_volume
2026-04-09 07:45:31.000066 | orchestrator | 8bd8d55e426e registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 36 minutes ago Up 34 minutes (healthy) cinder_scheduler
2026-04-09 07:45:31.000077 | orchestrator | 15d283af4473 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 37 minutes ago Up 35 minutes (healthy) cinder_api
2026-04-09 07:45:31.000094 | orchestrator | 4a6310586841 registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) glance_api
2026-04-09 07:45:31.000119 | orchestrator | 68ad8b201f19 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console
2026-04-09 07:45:31.000131 | orchestrator | e991144c1f02 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) skyline_apiserver
2026-04-09 07:45:31.000142 | orchestrator | 3ab5bd837668 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) horizon
2026-04-09 07:45:31.000153 | orchestrator | b93c74b27c92 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_novncproxy
2026-04-09 07:45:31.000164 | orchestrator | dd3f1357438f registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 48 minutes (healthy) nova_conductor
2026-04-09 07:45:31.000175 | orchestrator | 6ef6bcdf9479 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata
2026-04-09 07:45:31.000193 | orchestrator | 6e1d8aacb905 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_api
2026-04-09 07:45:31.000204 | orchestrator | b5997c91f9f9 registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_scheduler
2026-04-09 07:45:31.000215 | orchestrator | fa39e56cae09 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server
2026-04-09 07:45:31.000226 | orchestrator | f8ddb299a34b registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api
2026-04-09 07:45:31.000237 | orchestrator | be66b4a4b381 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone
2026-04-09 07:45:31.000248 | orchestrator | 93c5dabc7133 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet
2026-04-09 07:45:31.000259 | orchestrator | f1a4c027e215 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh
2026-04-09 07:45:31.000270 | orchestrator | 2d8db497aa1d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-04-09 07:45:31.000281 | orchestrator | 4b66763606eb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-0
2026-04-09 07:45:31.000292 | orchestrator | 69d38aa54653 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-0
2026-04-09 07:45:31.000303 | orchestrator | e7f3c9c0201a registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_northd
2026-04-09 07:45:31.000315 | orchestrator | be92a78a82a6 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db_relay_1
2026-04-09 07:45:31.000326 | orchestrator | af700aaff7b4 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db
2026-04-09 07:45:31.000344 | orchestrator | fd2448af25d4 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db
2026-04-09 07:45:31.000356 | orchestrator | a4418dc36b9a registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller
2026-04-09 07:45:31.000367 | orchestrator | 1b2d8d1b7c60 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd
2026-04-09 07:45:31.000378 | orchestrator | 2fc6c5b35f79 registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db
2026-04-09 07:45:31.000396 | orchestrator | de461dc1cca0 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq
2026-04-09 07:45:31.000407 | orchestrator | fd369469c702 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb
2026-04-09 07:45:31.000418 | orchestrator | 681dfd181394 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel
2026-04-09 07:45:31.000430 | orchestrator | 5c9446531019 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis
2026-04-09 07:45:31.000441 | orchestrator | 5b3ec2ff373e registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached
2026-04-09 07:45:31.000486 | orchestrator | 6e58c8902728 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards
2026-04-09 07:45:31.000506 | orchestrator | 2aa81444572c registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch
2026-04-09 07:45:31.000518 | orchestrator | d5cf6690e324 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived
2026-04-09 07:45:31.000529 | orchestrator | 705826d3a709 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql
2026-04-09 07:45:31.000540 | orchestrator | 1fed8ef905fd registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy
2026-04-09 07:45:31.000551 | orchestrator | 265fdb13870a registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron
2026-04-09 07:45:31.000562 | orchestrator | 506ede021ae2 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox
2026-04-09 07:45:31.000573 | orchestrator | aed22a87bd51 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd
2026-04-09 07:45:31.146703 | orchestrator |
2026-04-09 07:45:31.146800 | orchestrator | ## Images @ testbed-node-0
2026-04-09 07:45:31.146813 | orchestrator |
2026-04-09 07:45:31.146822 | orchestrator | + echo
2026-04-09 07:45:31.146831 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-09 07:45:31.146840 | orchestrator | + echo
2026-04-09 07:45:31.146849 | orchestrator | + osism container testbed-node-0 images
2026-04-09 07:45:32.767578 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 07:45:32.767674 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 11 days ago 288MB
2026-04-09 07:45:32.767687 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 11 days ago 1.54GB
2026-04-09 07:45:32.767718 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 11 days ago 1.57GB
2026-04-09 07:45:32.767727 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 11 days ago 590MB
2026-04-09 07:45:32.767736 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 11 days ago 277MB
2026-04-09 07:45:32.767746 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 11 days ago 1.04GB
2026-04-09 07:45:32.767754 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 11 days ago 350MB
2026-04-09 07:45:32.767763 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 11 days ago 427MB
2026-04-09 07:45:32.767772 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 11 days ago 683MB
2026-04-09 07:45:32.767781 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 11 days ago 277MB
2026-04-09 07:45:32.767790 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 11 days ago 285MB
2026-04-09 07:45:32.767798 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 11 days ago 293MB
2026-04-09 07:45:32.767808 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 11 days ago 293MB
2026-04-09 07:45:32.767823 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 11 days ago 284MB
2026-04-09 07:45:32.767837 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 11 days ago 284MB
2026-04-09 07:45:32.767851 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 11 days ago 1.2GB
2026-04-09 07:45:32.767897 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 11 days ago 463MB
2026-04-09 07:45:32.767917 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 11 days ago 309MB
2026-04-09 07:45:32.767932 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 11 days ago 368MB
2026-04-09 07:45:32.767946 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 11 days ago 303MB
2026-04-09 07:45:32.767960 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 11 days ago 312MB
2026-04-09 07:45:32.767974 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 11 days ago 317MB
2026-04-09 07:45:32.767986 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 11 days ago 301MB
2026-04-09 07:45:32.767995 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 11 days ago 301MB
2026-04-09 07:45:32.768003 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 11 days ago 301MB
2026-04-09 07:45:32.768012 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 11 days ago 301MB
2026-04-09 07:45:32.768020 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 11 days ago 1.09GB
2026-04-09 07:45:32.768037 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 11 days ago 1.06GB
2026-04-09 07:45:32.768046 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 11 days ago 1.05GB
2026-04-09 07:45:32.768073 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 11 days ago 997MB
2026-04-09 07:45:32.768083 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 11 days ago 996MB
2026-04-09 07:45:32.768091 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 11 days ago 1.07GB
2026-04-09 07:45:32.768100 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 11 days ago 1.07GB
2026-04-09 07:45:32.768108 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 11 days ago 1.05GB
2026-04-09 07:45:32.768117 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 11 days ago 1.05GB
2026-04-09 07:45:32.768125 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 11 days ago 1.05GB
2026-04-09 07:45:32.768134 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 11 days ago 996MB
2026-04-09 07:45:32.768142 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 11 days ago 995MB
2026-04-09 07:45:32.768151 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 11 days ago 995MB
2026-04-09 07:45:32.768159 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 11 days ago 995MB
2026-04-09 07:45:32.768168 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 11 days ago 994MB
2026-04-09 07:45:32.768176 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 11 days ago 1.12GB
2026-04-09 07:45:32.768185 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 11 days ago 1.79GB
2026-04-09 07:45:32.768193 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 11 days ago 1.43GB
2026-04-09 07:45:32.768202 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 11 days ago 1.43GB
2026-04-09 07:45:32.768211 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 11 days ago 1.44GB
2026-04-09 07:45:32.768219 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 11 days ago 1.24GB
2026-04-09 07:45:32.768233 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 11 days ago 1.07GB
2026-04-09 07:45:32.768242 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 11 days ago 1.02GB
2026-04-09 07:45:32.768251 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 11 days ago 1GB
2026-04-09 07:45:32.768259 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 11 days ago 1GB
2026-04-09 07:45:32.768268 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 11 days ago 1GB
2026-04-09 07:45:32.768276 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 11 days ago 1.27GB
2026-04-09 07:45:32.768290 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 11 days ago 1.15GB
2026-04-09 07:45:32.768299 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 11 days ago 1.01GB
2026-04-09 07:45:32.768308 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 11 days ago 1GB
2026-04-09 07:45:32.768320 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 11 days ago 1GB
2026-04-09 07:45:32.768329 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 11 days ago 1.01GB
2026-04-09 07:45:32.768338 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 11 days ago 1GB
2026-04-09 07:45:32.768346 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 11 days ago 1GB
2026-04-09 07:45:32.768361 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 11 days ago 1.23GB
2026-04-09 07:45:32.768371 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 11 days ago 1.39GB
2026-04-09 07:45:32.768379 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 11 days ago 1.23GB
2026-04-09 07:45:32.768388 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 11 days ago 1.23GB
2026-04-09 07:45:32.768397 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 11 days ago 1.07GB
2026-04-09 07:45:32.768405 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 11 days ago 1.07GB
2026-04-09 07:45:32.768414 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 11 days ago 1.07GB
2026-04-09 07:45:32.768422 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 11 days ago 1.24GB
2026-04-09 07:45:32.768431 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 11 days ago 301MB
2026-04-09 07:45:32.768443 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-09 07:45:32.768490 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-09 07:45:32.768505 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-09 07:45:32.768521 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-09 07:45:32.768535 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-09 07:45:32.768547 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-09 07:45:32.768557 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-09 07:45:32.768565 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-09 07:45:32.768574 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-09 07:45:32.768589 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-09 07:45:32.768598 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-09 07:45:32.768606 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-09 07:45:32.768615 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-09 07:45:32.768634 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-09 07:45:32.768687 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-09 07:45:32.768698 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-09 07:45:32.768707 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-09 07:45:32.768715 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-09 07:45:32.768724 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-09 07:45:32.768733 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-09 07:45:32.768741 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-09 07:45:32.768756 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-09 07:45:32.768766 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-09 07:45:32.768774 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-09 07:45:32.768783 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-09 07:45:32.768792 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-09 07:45:32.768800 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-09 07:45:32.768809 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-09 07:45:32.768818 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-09 07:45:32.768826 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-09 07:45:32.768835 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-09 07:45:32.768844 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-09 07:45:32.768852 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-09 07:45:32.768861 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-09 07:45:32.768876 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-09 07:45:32.768885 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-09 07:45:32.768893 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-09 07:45:32.768902 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-09 07:45:32.768910 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-09 07:45:32.768919 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-09 07:45:32.768928 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-09 07:45:32.768936 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-09 07:45:32.768945 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-09 07:45:32.768953 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-09 07:45:32.768962 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-09 07:45:32.768971 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-09 07:45:32.768979 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-09 07:45:32.768988 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-09 07:45:32.768997 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-09 07:45:32.769006 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-09 07:45:32.769014 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-09 07:45:32.769028 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-09 07:45:32.769037 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-09 07:45:32.769050 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-09 07:45:32.769059 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-09 07:45:32.769067 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-09 07:45:32.769076 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-09 07:45:32.769085 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-09 07:45:32.769093 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-09 07:45:32.769107 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-09 07:45:32.769116 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-09 07:45:32.769125 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-09 07:45:32.769134 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-09 07:45:32.769142 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-09 07:45:32.769151 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-09 07:45:32.769159 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-09 07:45:32.769168 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-09 07:45:32.769177 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-09 07:45:32.769185 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-09 07:45:32.913662 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 07:45:32.914134 | orchestrator | ++ semver
10.0.0 5.0.0 2026-04-09 07:45:32.965576 | orchestrator | 2026-04-09 07:45:32.965668 | orchestrator | ## Containers @ testbed-node-1 2026-04-09 07:45:32.965680 | orchestrator | 2026-04-09 07:45:32.965689 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-09 07:45:32.965698 | orchestrator | + echo 2026-04-09 07:45:32.965709 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-09 07:45:32.965719 | orchestrator | + echo 2026-04-09 07:45:32.965728 | orchestrator | + osism container testbed-node-1 ps 2026-04-09 07:45:34.549574 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-09 07:45:34.549678 | orchestrator | e335753e9f99 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 14 seconds ago Up 12 seconds (health: starting) magnum_conductor 2026-04-09 07:45:34.549695 | orchestrator | 3895cb9e65c8 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 46 seconds ago Up 45 seconds (healthy) magnum_api 2026-04-09 07:45:34.549708 | orchestrator | bca3e104b584 registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana 2026-04-09 07:45:34.549719 | orchestrator | c8544bcce50c registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter 2026-04-09 07:45:34.549733 | orchestrator | 3d1440b61924 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-09 07:45:34.549744 | orchestrator | 00fd13ef18d8 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_memcached_exporter 2026-04-09 07:45:34.549763 | orchestrator | 7ecef0b41733 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 
"dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-09 07:45:34.549813 | orchestrator | e6946da50c8e registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-09 07:45:34.549832 | orchestrator | 921835ea2ed9 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share 2026-04-09 07:45:34.549851 | orchestrator | f31d1c24e839 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-04-09 07:45:34.549863 | orchestrator | 2bd0acd37912 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-04-09 07:45:34.549874 | orchestrator | ef27507dc367 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-04-09 07:45:34.549885 | orchestrator | 20a9df2a790d registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_worker 2026-04-09 07:45:34.549896 | orchestrator | b3e23a2edbb4 registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_housekeeping 2026-04-09 07:45:34.549907 | orchestrator | 6cefb6a38f29 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_health_manager 2026-04-09 07:45:34.549935 | orchestrator | 68a250a2a9a1 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes octavia_driver_agent 2026-04-09 07:45:34.549947 | orchestrator | 
9761f392ed7c registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_api 2026-04-09 07:45:34.549978 | orchestrator | d216b77c790b registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_notifier 2026-04-09 07:45:34.549990 | orchestrator | a281c1fc7393 registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_listener 2026-04-09 07:45:34.550001 | orchestrator | 86205e5089ed registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_evaluator 2026-04-09 07:45:34.550070 | orchestrator | 26349356aa67 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_api 2026-04-09 07:45:34.550087 | orchestrator | 7753e809429f registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes ceilometer_central 2026-04-09 07:45:34.550101 | orchestrator | aab21f9174e3 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) ceilometer_notification 2026-04-09 07:45:34.550113 | orchestrator | 9858adcf1494 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-04-09 07:45:34.550135 | orchestrator | 41e07bebb51e registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-04-09 07:45:34.550147 | orchestrator | 6261e4c34a0b registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes 
(healthy) designate_producer 2026-04-09 07:45:34.550159 | orchestrator | b56574b13f67 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central 2026-04-09 07:45:34.550172 | orchestrator | 6fdc087bd260 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-04-09 07:45:34.550185 | orchestrator | 387299c338ab registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 2026-04-09 07:45:34.550197 | orchestrator | 1ba895c478f4 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker 2026-04-09 07:45:34.550209 | orchestrator | 710c2de2dd6e registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-04-09 07:45:34.550222 | orchestrator | 615ecc75e26f registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-04-09 07:45:34.550234 | orchestrator | 55575c07c041 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) cinder_backup 2026-04-09 07:45:34.550246 | orchestrator | 3da47ae4459c registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) cinder_volume 2026-04-09 07:45:34.550265 | orchestrator | 8df9e5246431 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 36 minutes ago Up 34 minutes (healthy) cinder_scheduler 2026-04-09 07:45:34.550285 | orchestrator | aef80b63dfbb 
registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 36 minutes ago Up 35 minutes (healthy) cinder_api 2026-04-09 07:45:34.550313 | orchestrator | 905d0c04a64e registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) glance_api 2026-04-09 07:45:34.550334 | orchestrator | 384a25dd544c registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console 2026-04-09 07:45:34.550353 | orchestrator | 1d45d0ab8498 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) skyline_apiserver 2026-04-09 07:45:34.550371 | orchestrator | 95c217592b84 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) horizon 2026-04-09 07:45:34.550401 | orchestrator | 604cab027b71 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_novncproxy 2026-04-09 07:45:34.550420 | orchestrator | d56c819e0021 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 48 minutes (healthy) nova_conductor 2026-04-09 07:45:34.550439 | orchestrator | 3635b8c3f125 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata 2026-04-09 07:45:34.550532 | orchestrator | e71368881720 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_api 2026-04-09 07:45:34.550546 | orchestrator | 86eefa805644 registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_scheduler 2026-04-09 
07:45:34.550557 | orchestrator | d956f9d17d35 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-09 07:45:34.550568 | orchestrator | 508dfb852a42 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-09 07:45:34.550587 | orchestrator | 1a0503603ceb registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-09 07:45:34.550599 | orchestrator | 3db49ad6f105 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-09 07:45:34.550611 | orchestrator | e23e043bfd80 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2026-04-09 07:45:34.550622 | orchestrator | 4c757dcc13db registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-04-09 07:45:34.550633 | orchestrator | a33294a05811 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-1 2026-04-09 07:45:34.550644 | orchestrator | 3e7867c40460 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-1 2026-04-09 07:45:34.550655 | orchestrator | 1c11f73abcda registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_northd 2026-04-09 07:45:34.550674 | orchestrator | eb5a8c7e67ff registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db_relay_1 2026-04-09 07:45:34.550692 | orchestrator | 6d9be8f3b4f3 
registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db 2026-04-09 07:45:34.550722 | orchestrator | c367d6543d53 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db 2026-04-09 07:45:34.550752 | orchestrator | 5d8143e6c606 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller 2026-04-09 07:45:34.550771 | orchestrator | 6fd1957431d4 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd 2026-04-09 07:45:34.550797 | orchestrator | d8169b084a37 registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db 2026-04-09 07:45:34.550814 | orchestrator | 48921cd2a5b5 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq 2026-04-09 07:45:34.550831 | orchestrator | cdb73f6759f2 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb 2026-04-09 07:45:34.550849 | orchestrator | 1a491027f2e5 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel 2026-04-09 07:45:34.550867 | orchestrator | e19d7b461ef0 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis 2026-04-09 07:45:34.550883 | orchestrator | 0cda70afed52 registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached 2026-04-09 07:45:34.550902 | orchestrator | 083d66a8689a 
registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards 2026-04-09 07:45:34.550919 | orchestrator | df1267e8234d registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch 2026-04-09 07:45:34.550937 | orchestrator | 9f0f4755d7e9 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived 2026-04-09 07:45:34.550955 | orchestrator | bd4facf41573 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql 2026-04-09 07:45:34.550974 | orchestrator | 971e502196cb registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy 2026-04-09 07:45:34.550993 | orchestrator | 3efed4060c0c registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron 2026-04-09 07:45:34.551012 | orchestrator | 5013cc1ca169 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox 2026-04-09 07:45:34.551032 | orchestrator | ea81925f9312 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-09 07:45:34.711887 | orchestrator | 2026-04-09 07:45:34.711982 | orchestrator | ## Images @ testbed-node-1 2026-04-09 07:45:34.711997 | orchestrator | 2026-04-09 07:45:34.712010 | orchestrator | + echo 2026-04-09 07:45:34.712023 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-09 07:45:34.712060 | orchestrator | + echo 2026-04-09 07:45:34.712072 | orchestrator | + osism container testbed-node-1 images 2026-04-09 07:45:36.407656 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-09 07:45:36.407750 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 11 days ago 288MB 2026-04-09 07:45:36.407762 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 11 days ago 1.54GB 2026-04-09 07:45:36.407771 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 11 days ago 1.57GB 2026-04-09 07:45:36.407779 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 11 days ago 590MB 2026-04-09 07:45:36.407788 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 11 days ago 277MB 2026-04-09 07:45:36.407796 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 11 days ago 1.04GB 2026-04-09 07:45:36.407804 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 11 days ago 427MB 2026-04-09 07:45:36.407816 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 11 days ago 350MB 2026-04-09 07:45:36.407830 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 11 days ago 683MB 2026-04-09 07:45:36.407844 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 11 days ago 277MB 2026-04-09 07:45:36.407858 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 11 days ago 285MB 2026-04-09 07:45:36.407873 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 11 days ago 293MB 2026-04-09 07:45:36.407887 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 11 days ago 293MB 2026-04-09 07:45:36.407897 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 11 days ago 284MB 2026-04-09 07:45:36.407922 | 
orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 11 days ago 284MB 2026-04-09 07:45:36.407931 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 11 days ago 1.2GB 2026-04-09 07:45:36.407940 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 11 days ago 463MB 2026-04-09 07:45:36.407948 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 11 days ago 309MB 2026-04-09 07:45:36.407956 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 11 days ago 368MB 2026-04-09 07:45:36.407963 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 11 days ago 303MB 2026-04-09 07:45:36.407971 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 11 days ago 312MB 2026-04-09 07:45:36.407979 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 11 days ago 317MB 2026-04-09 07:45:36.407987 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 11 days ago 301MB 2026-04-09 07:45:36.407995 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 11 days ago 301MB 2026-04-09 07:45:36.408018 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 11 days ago 301MB 2026-04-09 07:45:36.408027 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 11 days ago 301MB 2026-04-09 07:45:36.408035 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 11 days ago 1.09GB 2026-04-09 07:45:36.408043 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 11 days ago 1.06GB 2026-04-09 07:45:36.408050 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 11 days ago 1.05GB 2026-04-09 07:45:36.408072 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 11 days ago 997MB 2026-04-09 07:45:36.408081 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 11 days ago 996MB 2026-04-09 07:45:36.408089 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 11 days ago 1.07GB 2026-04-09 07:45:36.408097 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 11 days ago 1.07GB 2026-04-09 07:45:36.408104 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 11 days ago 1.05GB 2026-04-09 07:45:36.408113 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 11 days ago 1.05GB 2026-04-09 07:45:36.408120 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 11 days ago 1.05GB 2026-04-09 07:45:36.408128 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 11 days ago 996MB 2026-04-09 07:45:36.408140 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 11 days ago 995MB 2026-04-09 07:45:36.408148 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 11 days ago 995MB 2026-04-09 07:45:36.408156 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 11 days ago 995MB 2026-04-09 07:45:36.408163 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 
20.0.0.20260328 bb920611ad39 11 days ago 994MB 2026-04-09 07:45:36.408171 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 11 days ago 1.12GB 2026-04-09 07:45:36.408179 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 11 days ago 1.79GB 2026-04-09 07:45:36.408187 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 11 days ago 1.43GB 2026-04-09 07:45:36.408194 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 11 days ago 1.43GB 2026-04-09 07:45:36.408208 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 11 days ago 1.44GB 2026-04-09 07:45:36.408221 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 11 days ago 1.24GB 2026-04-09 07:45:36.408235 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 11 days ago 1.07GB 2026-04-09 07:45:36.408249 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 11 days ago 1.02GB 2026-04-09 07:45:36.408270 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 11 days ago 1GB 2026-04-09 07:45:36.408285 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 11 days ago 1GB 2026-04-09 07:45:36.408299 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 11 days ago 1GB 2026-04-09 07:45:36.408311 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 11 days ago 1.27GB 2026-04-09 07:45:36.408325 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 11 days ago 1.15GB 2026-04-09 07:45:36.408339 | 
orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 11 days ago 1.01GB 2026-04-09 07:45:36.408354 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 11 days ago 1GB 2026-04-09 07:45:36.408473 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 11 days ago 1GB 2026-04-09 07:45:36.408484 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 11 days ago 1.01GB 2026-04-09 07:45:36.408493 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 11 days ago 1GB 2026-04-09 07:45:36.408505 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 11 days ago 1GB 2026-04-09 07:45:36.408528 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 11 days ago 1.23GB 2026-04-09 07:45:36.408543 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 11 days ago 1.39GB 2026-04-09 07:45:36.408557 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 11 days ago 1.23GB 2026-04-09 07:45:36.408570 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 11 days ago 1.23GB 2026-04-09 07:45:36.408584 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 11 days ago 1.07GB 2026-04-09 07:45:36.408716 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 11 days ago 1.07GB 2026-04-09 07:45:36.408729 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 11 days ago 1.07GB 2026-04-09 07:45:36.408750 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 
20.0.2.20260328 d5693cb24e6d 11 days ago 1.24GB
2026-04-09 07:45:36.408759 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 11 days ago 301MB
2026-04-09 07:45:36.408767 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-09 07:45:36.408774 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-09 07:45:36.408782 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-09 07:45:36.408790 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-09 07:45:36.408798 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-09 07:45:36.408832 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-09 07:45:36.408845 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-09 07:45:36.408859 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-09 07:45:36.408872 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-09 07:45:36.408893 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-09 07:45:36.408907 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-09 07:45:36.408920 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-09 07:45:36.408938 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-09 07:45:36.408952 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-09 07:45:36.408962 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-09 07:45:36.408970 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-09 07:45:36.408984 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-09 07:45:36.408996 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-09 07:45:36.409010 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-09 07:45:36.409022 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-09 07:45:36.409035 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-09 07:45:36.409049 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-09 07:45:36.409062 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-09 07:45:36.409073 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-09 07:45:36.409081 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-09 07:45:36.409089 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-09 07:45:36.409103 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-09 07:45:36.409111 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-09 07:45:36.409119 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-09 07:45:36.409132 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-09 07:45:36.409147 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-09 07:45:36.409155 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-09 07:45:36.409163 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-09 07:45:36.409174 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-09 07:45:36.409187 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-09 07:45:36.409200 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-09 07:45:36.409213 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-09 07:45:36.409227 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-09 07:45:36.409235 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-09 07:45:36.409243 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-09 07:45:36.409314 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-09 07:45:36.409328 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-09 07:45:36.409341 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-09 07:45:36.409354 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-09 07:45:36.409367 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-09 07:45:36.409381 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-09 07:45:36.409394 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-09 07:45:36.409408 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-09 07:45:36.409420 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-09 07:45:36.409434 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-09 07:45:36.409479 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-09 07:45:36.409495 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-09 07:45:36.409507 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-09 07:45:36.409521 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-09 07:45:36.409536 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-09 07:45:36.409558 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-09 07:45:36.409572 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-09 07:45:36.409593 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-09 07:45:36.409606 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-09 07:45:36.409620 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-09 07:45:36.409632 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-09 07:45:36.409645 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-09 07:45:36.409658 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-09 07:45:36.409671 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-09 07:45:36.409684 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-09 07:45:36.409697 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-09 07:45:36.409705 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-09 07:45:36.409713 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-09 07:45:36.409721 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-09 07:45:36.564210 |
orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 07:45:36.564303 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-09 07:45:36.618858 | orchestrator |
2026-04-09 07:45:36.618957 | orchestrator | ## Containers @ testbed-node-2
2026-04-09 07:45:36.618972 | orchestrator |
2026-04-09 07:45:36.618983 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-09 07:45:36.618995 | orchestrator | + echo
2026-04-09 07:45:36.619007 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-09 07:45:36.619019 | orchestrator | + echo
2026-04-09 07:45:36.619031 | orchestrator | + osism container testbed-node-2 ps
2026-04-09 07:45:38.127768 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-09 07:45:38.127868 | orchestrator | acfcd1b709cb registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 28 seconds ago Up 27 seconds (health: starting) magnum_conductor
2026-04-09 07:45:38.127886 | orchestrator | d71b3f12840f registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 51 seconds ago Up 49 seconds (healthy) magnum_api
2026-04-09 07:45:38.127899 | orchestrator | c0a8647f40fd registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana
2026-04-09 07:45:38.127910 | orchestrator | 65cf2a1dced5 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter
2026-04-09 07:45:38.127923 | orchestrator | d55a58a7b7f2 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor
2026-04-09 07:45:38.127975 | orchestrator | 82707bfae55d registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 6 minutes prometheus_memcached_exporter
2026-04-09 07:45:38.127988 | orchestrator | 84b2f48c0ea0 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter
2026-04-09 07:45:38.128000 | orchestrator | 26ad17c56f5a registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-09 07:45:38.128011 | orchestrator | a6687753b6d5 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share
2026-04-09 07:45:38.128022 | orchestrator | f9ef97b7396d registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-04-09 07:45:38.128033 | orchestrator | ef7c0784ab69 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-04-09 07:45:38.128049 | orchestrator | 2680145aa45b registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) manila_api
2026-04-09 07:45:38.128060 | orchestrator | f5d0af96ab39 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_worker
2026-04-09 07:45:38.128071 | orchestrator | 4b73b26d933a registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_housekeeping
2026-04-09 07:45:38.128082 | orchestrator | 928e6341cbf9 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_health_manager
2026-04-09 07:45:38.128093 | orchestrator | 3d15f2ae8342 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes octavia_driver_agent
2026-04-09 07:45:38.128104 | orchestrator | 5dab32549011 registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) octavia_api
2026-04-09 07:45:38.128133 | orchestrator | 8e5f4ae0c205 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_notifier
2026-04-09 07:45:38.128145 | orchestrator | 60e6930b04da registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_listener
2026-04-09 07:45:38.128156 | orchestrator | f727c0336a6d registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_evaluator
2026-04-09 07:45:38.128167 | orchestrator | 05361ac16238 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) aodh_api
2026-04-09 07:45:38.128186 | orchestrator | 541c8d5806cf registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes ceilometer_central
2026-04-09 07:45:38.128197 | orchestrator | da4a9734c9ea registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) ceilometer_notification
2026-04-09 07:45:38.128208 | orchestrator | 76ff4f00f806 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-04-09 07:45:38.128219 | orchestrator | f728ffec5ea4 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-04-09 07:45:38.128229 | orchestrator | ad2e15e62674 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-04-09 07:45:38.128240 | orchestrator | 12a9a4029e5f registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central
2026-04-09 07:45:38.128251 | orchestrator | 3925745c97ee registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api
2026-04-09 07:45:38.128261 | orchestrator | 06ffc05cb859 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-04-09 07:45:38.128272 | orchestrator | 9fae53222077 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-04-09 07:45:38.128288 | orchestrator | 1b11ddd37b3a registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-04-09 07:45:38.128299 | orchestrator | 3ec2d22689c5 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-09 07:45:38.128310 | orchestrator | cfe1279e313d registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) cinder_backup
2026-04-09 07:45:38.128321 | orchestrator | efebadfeecda registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 36 minutes ago Up 34 minutes (healthy) cinder_volume
2026-04-09 07:45:38.128331 | orchestrator | af82e67d4970 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 36 minutes ago Up 35 minutes (healthy) cinder_scheduler
2026-04-09 07:45:38.128342 | orchestrator | ef07e4294898 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 36 minutes ago Up 35 minutes (healthy) cinder_api
2026-04-09 07:45:38.128359 | orchestrator | b2391ad9ef4c registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) glance_api
2026-04-09 07:45:38.128370 | orchestrator | 569374cab5c7 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console
2026-04-09 07:45:38.128485 | orchestrator | d775eb173855 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) skyline_apiserver
2026-04-09 07:45:38.128500 | orchestrator | f0ad9050c09e registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) horizon
2026-04-09 07:45:38.128511 | orchestrator | 9ab6da8e6c0b registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_novncproxy
2026-04-09 07:45:38.128522 | orchestrator | b7487b58fa81 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 49 minutes (healthy) nova_conductor
2026-04-09 07:45:38.128532 | orchestrator | 1ed92f074f50 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata
2026-04-09 07:45:38.128543 | orchestrator | 52acb5f947f0 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_api
2026-04-09 07:45:38.128554 | orchestrator | 0e3c64d4d3fc registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_scheduler
2026-04-09 07:45:38.128756 | orchestrator | 6edaab62a0a0 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server
2026-04-09 07:45:38.128775 | orchestrator | c77832d647b8 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api
2026-04-09 07:45:38.128787 | orchestrator | fbfcb9f9e666 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone
2026-04-09 07:45:38.128798 | orchestrator | 7bc040b1ce24 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet
2026-04-09 07:45:38.128808 | orchestrator | 3c75f1912eac registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh
2026-04-09 07:45:38.128819 | orchestrator | 2e0796ee4799 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-04-09 07:45:38.128830 | orchestrator | 695da0837ce5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-2
2026-04-09 07:45:38.128841 | orchestrator | 5ed6058fb18c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-2
2026-04-09 07:45:38.128852 | orchestrator | 0c440e45db61 registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_northd
2026-04-09 07:45:38.128863 | orchestrator | afccc4339836 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db_relay_1
2026-04-09 07:45:38.128890 | orchestrator | 546159b1de96 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db
2026-04-09 07:45:38.128902 | orchestrator | 5219a9288156 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db
2026-04-09 07:45:38.128913 | orchestrator | c354cf7e8d38 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller
2026-04-09 07:45:38.128924 | orchestrator | 02f44242c0d3 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd
2026-04-09 07:45:38.128935 | orchestrator | c00b07b9c06d registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db
2026-04-09 07:45:38.128945 | orchestrator | ff95c3aba2fd registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq
2026-04-09 07:45:38.128956 | orchestrator | d19e05003738 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb
2026-04-09 07:45:38.128967 | orchestrator | 79a893f6919c registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel
2026-04-09 07:45:38.128978 | orchestrator | 8778e489e944 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis
2026-04-09 07:45:38.128996 | orchestrator | bd0918d43b8b registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached
2026-04-09 07:45:38.129007 | orchestrator | d8d0887ae9a9 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards
2026-04-09 07:45:38.129018 | orchestrator | 1881d6e1abf7 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch
2026-04-09 07:45:38.129029 | orchestrator | fbd7583db57d registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived
2026-04-09 07:45:38.129045 | orchestrator | 4836cf6d59ac registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql
2026-04-09 07:45:38.129056 | orchestrator | 7dc9a25cdd88 registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy
2026-04-09 07:45:38.129068 | orchestrator | d0ed27a4ba59 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron
2026-04-09 07:45:38.129079 | orchestrator | fceb8b654246 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox
2026-04-09 07:45:38.129097 | orchestrator | 5121955445bd registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd
2026-04-09 07:45:38.273343 | orchestrator |
2026-04-09 07:45:38.273442 | orchestrator | ## Images @ testbed-node-2
2026-04-09 07:45:38.273529 | orchestrator |
2026-04-09 07:45:38.273543 | orchestrator | + echo
2026-04-09 07:45:38.273557 | orchestrator | + echo '## Images @ testbed-node-2'
2026-04-09 07:45:38.273571 | orchestrator | + echo
2026-04-09 07:45:38.273582 |
orchestrator | + osism container testbed-node-2 images
2026-04-09 07:45:39.846668 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 07:45:39.846832 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 11 days ago 288MB
2026-04-09 07:45:39.846860 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 11 days ago 1.54GB
2026-04-09 07:45:39.846879 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 11 days ago 1.57GB
2026-04-09 07:45:39.846895 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 11 days ago 590MB
2026-04-09 07:45:39.846910 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 11 days ago 277MB
2026-04-09 07:45:39.846927 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 11 days ago 1.04GB
2026-04-09 07:45:39.846944 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 11 days ago 350MB
2026-04-09 07:45:39.846960 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 11 days ago 427MB
2026-04-09 07:45:39.846975 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 11 days ago 683MB
2026-04-09 07:45:39.846991 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 11 days ago 277MB
2026-04-09 07:45:39.847008 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 11 days ago 285MB
2026-04-09 07:45:39.847024 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 11 days ago 293MB
2026-04-09 07:45:39.847041 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 11 days ago 293MB
2026-04-09 07:45:39.847057 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 11 days ago 284MB
2026-04-09 07:45:39.847073 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 11 days ago 284MB
2026-04-09 07:45:39.847090 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 11 days ago 1.2GB
2026-04-09 07:45:39.847107 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 11 days ago 463MB
2026-04-09 07:45:39.847125 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 11 days ago 309MB
2026-04-09 07:45:39.847143 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 11 days ago 368MB
2026-04-09 07:45:39.847163 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 11 days ago 303MB
2026-04-09 07:45:39.847221 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 11 days ago 312MB
2026-04-09 07:45:39.847265 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 11 days ago 317MB
2026-04-09 07:45:39.847286 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 11 days ago 301MB
2026-04-09 07:45:39.847306 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 11 days ago 301MB
2026-04-09 07:45:39.847326 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 11 days ago 301MB
2026-04-09 07:45:39.847343 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 11 days ago 301MB
2026-04-09 07:45:39.847363 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 11 days ago 1.09GB
2026-04-09 07:45:39.847382 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 11 days ago 1.06GB
2026-04-09 07:45:39.847403 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 11 days ago 1.05GB
2026-04-09 07:45:39.847479 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 11 days ago 997MB
2026-04-09 07:45:39.847494 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 11 days ago 996MB
2026-04-09 07:45:39.847505 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 11 days ago 1.07GB
2026-04-09 07:45:39.847516 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 11 days ago 1.07GB
2026-04-09 07:45:39.847527 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 11 days ago 1.05GB
2026-04-09 07:45:39.847537 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 11 days ago 1.05GB
2026-04-09 07:45:39.847548 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 11 days ago 1.05GB
2026-04-09 07:45:39.847559 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 11 days ago 996MB
2026-04-09 07:45:39.847570 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 11 days ago 995MB
2026-04-09 07:45:39.847580 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 11 days ago 995MB
2026-04-09 07:45:39.847591 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 11 days ago 995MB
2026-04-09 07:45:39.847602 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 11 days ago 994MB
2026-04-09 07:45:39.847612 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 11 days ago 1.12GB
2026-04-09 07:45:39.847623 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 11 days ago 1.79GB
2026-04-09 07:45:39.847634 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 11 days ago 1.43GB
2026-04-09 07:45:39.847644 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 11 days ago 1.43GB
2026-04-09 07:45:39.847666 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 11 days ago 1.44GB
2026-04-09 07:45:39.847676 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 11 days ago 1.24GB
2026-04-09 07:45:39.847687 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 11 days ago 1.07GB
2026-04-09 07:45:39.847697 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 11 days ago 1.02GB
2026-04-09 07:45:39.847708 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 11 days ago 1GB
2026-04-09 07:45:39.847719 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 11 days ago 1GB
2026-04-09 07:45:39.847729 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 11 days ago 1GB
2026-04-09 07:45:39.847740 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 11 days ago 1.27GB
2026-04-09 07:45:39.847751 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 11 days ago 1.15GB
2026-04-09 07:45:39.847762 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 11 days ago 1.01GB
2026-04-09 07:45:39.847773 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 11 days ago 1GB
2026-04-09 07:45:39.847784 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 11 days ago 1GB
2026-04-09 07:45:39.847794 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 11 days ago 1.01GB
2026-04-09 07:45:39.847805 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 11 days ago 1GB
2026-04-09 07:45:39.847816 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 11 days ago 1GB
2026-04-09 07:45:39.847834 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 11 days ago 1.23GB
2026-04-09 07:45:39.847855 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 11 days ago 1.39GB
2026-04-09 07:45:39.847867 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 11 days ago 1.23GB
2026-04-09 07:45:39.847878 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 11 days ago 1.23GB
2026-04-09 07:45:39.847889 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 11 days ago 1.07GB
2026-04-09 07:45:39.847900 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 11 days ago 1.07GB
2026-04-09 07:45:39.847911 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 11 days ago 1.07GB
2026-04-09 07:45:39.847921 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 11 days ago 1.24GB
2026-04-09 07:45:39.847932 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 11 days ago 301MB
2026-04-09 07:45:39.847943 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-09 07:45:39.847961 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-09 07:45:39.847973 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-09 07:45:39.847983 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-09 07:45:39.847995 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-09 07:45:39.848006 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-09 07:45:39.848017 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-09 07:45:39.848028 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-09 07:45:39.848038 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-09 07:45:39.848049 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-09 07:45:39.848060 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-09 07:45:39.848071 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-09 07:45:39.848226 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-09 07:45:39.848247 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-09 07:45:39.848259 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-09 07:45:39.848270 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-09 07:45:39.848281 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-09 07:45:39.848291 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-09 07:45:39.848302 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-09 07:45:39.848313 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-09 07:45:39.848324 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-09 07:45:39.848334 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-09 07:45:39.848345 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-09 07:45:39.848356 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-09 07:45:39.848367 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-09 07:45:39.848378 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-09 07:45:39.848396 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-09 07:45:39.848407 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-09 07:45:39.848418 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-09 07:45:39.848429 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-09 07:45:39.848439 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-09 07:45:39.848472 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-09 07:45:39.848484 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-09 07:45:39.848495 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-09 07:45:39.848505 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-09 07:45:39.848516 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-09 07:45:39.848527 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-09 07:45:39.848538 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-09 07:45:39.848549 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-09 07:45:39.848560 | orchestrator |
registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-09 07:45:39.848570 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-09 07:45:39.848581 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-09 07:45:39.848592 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-09 07:45:39.848609 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-09 07:45:39.848625 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-09 07:45:39.848637 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-09 07:45:39.848648 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-09 07:45:39.848659 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-09 07:45:39.848670 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-09 07:45:39.848680 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-09 07:45:39.848691 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-09 07:45:39.848709 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-09 07:45:39.848720 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-09 07:45:39.848730 | 
orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-09 07:45:39.848741 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-09 07:45:39.848752 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-09 07:45:39.848763 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-09 07:45:39.848774 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-09 07:45:39.848785 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-09 07:45:39.848796 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-09 07:45:39.848807 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-09 07:45:39.848818 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-09 07:45:39.848829 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-09 07:45:39.848840 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-09 07:45:39.848851 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-09 07:45:39.848862 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-09 07:45:39.848873 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-09 07:45:39.848883 | 
orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-09 07:45:39.848895 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-09 07:45:39.991009 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-09 07:45:40.000616 | orchestrator | + set -e 2026-04-09 07:45:40.000685 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 07:45:40.000695 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 07:45:40.000703 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 07:45:40.000710 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 07:45:40.000716 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 07:45:40.000723 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 07:45:40.000732 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 07:45:40.000738 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 07:45:40.000746 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 07:45:40.000753 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 07:45:40.000759 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 07:45:40.000766 | orchestrator | ++ export ARA=false 2026-04-09 07:45:40.000773 | orchestrator | ++ ARA=false 2026-04-09 07:45:40.000780 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 07:45:40.000787 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 07:45:40.000793 | orchestrator | ++ export TEMPEST=false 2026-04-09 07:45:40.000800 | orchestrator | ++ TEMPEST=false 2026-04-09 07:45:40.000807 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 07:45:40.000813 | orchestrator | ++ IS_ZUUL=true 2026-04-09 07:45:40.000820 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 07:45:40.000846 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 07:45:40.000853 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 07:45:40.000860 | orchestrator | 
++ EXTERNAL_API=false 2026-04-09 07:45:40.000866 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 07:45:40.000873 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 07:45:40.000880 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 07:45:40.000887 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 07:45:40.000893 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 07:45:40.000900 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 07:45:40.000906 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-09 07:45:40.000913 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-09 07:45:40.000919 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 07:45:40.000926 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-09 07:45:40.007731 | orchestrator | + set -e 2026-04-09 07:45:40.007792 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 07:45:40.007801 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 07:45:40.007809 | orchestrator | ++ INTERACTIVE=false 2026-04-09 07:45:40.007816 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 07:45:40.007823 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 07:45:40.007829 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-09 07:45:40.008558 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-09 07:45:40.012413 | orchestrator | 2026-04-09 07:45:40.012485 | orchestrator | # Ceph status 2026-04-09 07:45:40.012500 | orchestrator | 2026-04-09 07:45:40.012512 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-09 07:45:40.012524 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-09 07:45:40.012535 | orchestrator | + echo 2026-04-09 07:45:40.012547 | orchestrator | + echo '# Ceph status' 2026-04-09 07:45:40.012558 | orchestrator | + echo 2026-04-09 07:45:40.012569 | orchestrator | + ceph -s 2026-04-09 
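The `manager-version.sh` trace above pulls `manager_version` out of the environment configuration with a single awk invocation (`-F': '` passed as one argument). A minimal standalone sketch of that extraction, using an illustrative temp file rather than the real `/opt/configuration` path:

```shell
# Sketch: extract a top-level scalar from a flat YAML file with awk,
# mirroring the manager-version.sh trace above. The file below is a stand-in.
cat > /tmp/configuration.yml <<'EOF'
manager_version: 10.0.0
openstack_version: 2024.2
EOF

# '-F: ' sets the field separator to ": ", so $2 is the value after the key.
MANAGER_VERSION=$(awk '-F: ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
echo "$MANAGER_VERSION"
```

This only works for unquoted scalars at the top level; anything nested or quoted would need a real YAML parser.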
07:45:40.689939 | orchestrator | cluster: 2026-04-09 07:45:40.690103 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-09 07:45:40.690133 | orchestrator | health: HEALTH_OK 2026-04-09 07:45:40.690153 | orchestrator | 2026-04-09 07:45:40.690217 | orchestrator | services: 2026-04-09 07:45:40.690241 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 2h) 2026-04-09 07:45:40.690278 | orchestrator | mgr: testbed-node-0(active, since 2h), standbys: testbed-node-1, testbed-node-2 2026-04-09 07:45:40.690299 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-09 07:45:40.690313 | orchestrator | osd: 6 osds: 6 up (since 106m), 6 in (since 4h) 2026-04-09 07:45:40.690324 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-09 07:45:40.690335 | orchestrator | 2026-04-09 07:45:40.690346 | orchestrator | data: 2026-04-09 07:45:40.690357 | orchestrator | volumes: 1/1 healthy 2026-04-09 07:45:40.690368 | orchestrator | pools: 14 pools, 401 pgs 2026-04-09 07:45:40.690379 | orchestrator | objects: 819 objects, 2.8 GiB 2026-04-09 07:45:40.690390 | orchestrator | usage: 7.9 GiB used, 112 GiB / 120 GiB avail 2026-04-09 07:45:40.690401 | orchestrator | pgs: 401 active+clean 2026-04-09 07:45:40.690412 | orchestrator | 2026-04-09 07:45:40.690422 | orchestrator | io: 2026-04-09 07:45:40.690433 | orchestrator | client: 1.3 KiB/s rd, 1 op/s rd, 0 op/s wr 2026-04-09 07:45:40.690506 | orchestrator | 2026-04-09 07:45:40.745882 | orchestrator | 2026-04-09 07:45:40.745990 | orchestrator | # Ceph versions 2026-04-09 07:45:40.746075 | orchestrator | 2026-04-09 07:45:40.746099 | orchestrator | + echo 2026-04-09 07:45:40.746115 | orchestrator | + echo '# Ceph versions' 2026-04-09 07:45:40.746132 | orchestrator | + echo 2026-04-09 07:45:40.746149 | orchestrator | + ceph versions 2026-04-09 07:45:41.346362 | orchestrator | { 2026-04-09 07:45:41.346517 | orchestrator | "mon": { 2026-04-09 07:45:41.346546 | orchestrator | "ceph version 
18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-09 07:45:41.347272 | orchestrator | }, 2026-04-09 07:45:41.347315 | orchestrator | "mgr": { 2026-04-09 07:45:41.347325 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-09 07:45:41.347334 | orchestrator | }, 2026-04-09 07:45:41.347343 | orchestrator | "osd": { 2026-04-09 07:45:41.347352 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-04-09 07:45:41.347360 | orchestrator | }, 2026-04-09 07:45:41.347369 | orchestrator | "mds": { 2026-04-09 07:45:41.347378 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-09 07:45:41.347410 | orchestrator | }, 2026-04-09 07:45:41.347419 | orchestrator | "rgw": { 2026-04-09 07:45:41.347427 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-09 07:45:41.347436 | orchestrator | }, 2026-04-09 07:45:41.347471 | orchestrator | "overall": { 2026-04-09 07:45:41.347482 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-04-09 07:45:41.347491 | orchestrator | } 2026-04-09 07:45:41.347500 | orchestrator | } 2026-04-09 07:45:41.391561 | orchestrator | 2026-04-09 07:45:41.391624 | orchestrator | # Ceph OSD tree 2026-04-09 07:45:41.391630 | orchestrator | 2026-04-09 07:45:41.391636 | orchestrator | + echo 2026-04-09 07:45:41.391640 | orchestrator | + echo '# Ceph OSD tree' 2026-04-09 07:45:41.391645 | orchestrator | + echo 2026-04-09 07:45:41.391649 | orchestrator | + ceph osd df tree 2026-04-09 07:45:41.928575 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-09 07:45:41.928684 | orchestrator | -1 0.11691 - 120 GiB 7.9 GiB 7.6 GiB 46 KiB 318 MiB 112 GiB 6.62 1.00 - root default 2026-04-09 07:45:41.928700 | orchestrator | -5 0.03897 - 40 
GiB 2.6 GiB 2.5 GiB 15 KiB 100 MiB 37 GiB 6.61 1.00 - host testbed-node-3 2026-04-09 07:45:41.928763 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 6 KiB 50 MiB 19 GiB 5.53 0.83 176 up osd.1 2026-04-09 07:45:41.928776 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 9 KiB 50 MiB 18 GiB 7.69 1.16 216 up osd.3 2026-04-09 07:45:41.928788 | orchestrator | -3 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 100 MiB 37 GiB 6.61 1.00 - host testbed-node-4 2026-04-09 07:45:41.928799 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 7 KiB 46 MiB 18 GiB 7.91 1.20 200 up osd.0 2026-04-09 07:45:41.928810 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 8 KiB 54 MiB 19 GiB 5.30 0.80 190 up osd.4 2026-04-09 07:45:41.928821 | orchestrator | -7 0.03897 - 40 GiB 2.7 GiB 2.5 GiB 16 KiB 117 MiB 37 GiB 6.65 1.00 - host testbed-node-5 2026-04-09 07:45:41.928832 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 7 KiB 58 MiB 18 GiB 7.44 1.12 191 up osd.2 2026-04-09 07:45:41.928843 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 9 KiB 58 MiB 19 GiB 5.86 0.89 197 up osd.5 2026-04-09 07:45:41.928853 | orchestrator | TOTAL 120 GiB 7.9 GiB 7.6 GiB 48 KiB 318 MiB 112 GiB 6.62 2026-04-09 07:45:41.928864 | orchestrator | MIN/MAX VAR: 0.80/1.20 STDDEV: 1.08 2026-04-09 07:45:41.981664 | orchestrator | 2026-04-09 07:45:41.981748 | orchestrator | # Ceph monitor status 2026-04-09 07:45:41.981762 | orchestrator | 2026-04-09 07:45:41.981774 | orchestrator | + echo 2026-04-09 07:45:41.981785 | orchestrator | + echo '# Ceph monitor status' 2026-04-09 07:45:41.981797 | orchestrator | + echo 2026-04-09 07:45:41.981808 | orchestrator | + ceph mon stat 2026-04-09 07:45:42.571686 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: 
{}, election epoch 38, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-09 07:45:42.615837 | orchestrator | 2026-04-09 07:45:42.615953 | orchestrator | # Ceph quorum status 2026-04-09 07:45:42.615979 | orchestrator | 2026-04-09 07:45:42.616250 | orchestrator | + echo 2026-04-09 07:45:42.616287 | orchestrator | + echo '# Ceph quorum status' 2026-04-09 07:45:42.616306 | orchestrator | + echo 2026-04-09 07:45:42.616342 | orchestrator | + ceph quorum_status 2026-04-09 07:45:42.616363 | orchestrator | + jq 2026-04-09 07:45:43.256516 | orchestrator | { 2026-04-09 07:45:43.256636 | orchestrator | "election_epoch": 38, 2026-04-09 07:45:43.256664 | orchestrator | "quorum": [ 2026-04-09 07:45:43.256684 | orchestrator | 0, 2026-04-09 07:45:43.256702 | orchestrator | 1, 2026-04-09 07:45:43.256718 | orchestrator | 2 2026-04-09 07:45:43.256734 | orchestrator | ], 2026-04-09 07:45:43.256744 | orchestrator | "quorum_names": [ 2026-04-09 07:45:43.256754 | orchestrator | "testbed-node-0", 2026-04-09 07:45:43.256788 | orchestrator | "testbed-node-1", 2026-04-09 07:45:43.256798 | orchestrator | "testbed-node-2" 2026-04-09 07:45:43.256808 | orchestrator | ], 2026-04-09 07:45:43.256818 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-09 07:45:43.256830 | orchestrator | "quorum_age": 8178, 2026-04-09 07:45:43.256840 | orchestrator | "features": { 2026-04-09 07:45:43.256850 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-09 07:45:43.256859 | orchestrator | "quorum_mon": [ 2026-04-09 07:45:43.256869 | orchestrator | "kraken", 2026-04-09 07:45:43.256879 | orchestrator | "luminous", 2026-04-09 07:45:43.256889 | orchestrator | "mimic", 2026-04-09 07:45:43.256898 | orchestrator | "osdmap-prune", 2026-04-09 07:45:43.256908 | orchestrator | "nautilus", 2026-04-09 07:45:43.256917 | orchestrator | "octopus", 2026-04-09 07:45:43.256927 | orchestrator | "pacific", 2026-04-09 07:45:43.256937 | orchestrator | "elector-pinging", 
2026-04-09 07:45:43.256946 | orchestrator | "quincy", 2026-04-09 07:45:43.256956 | orchestrator | "reef" 2026-04-09 07:45:43.256966 | orchestrator | ] 2026-04-09 07:45:43.256975 | orchestrator | }, 2026-04-09 07:45:43.256985 | orchestrator | "monmap": { 2026-04-09 07:45:43.256995 | orchestrator | "epoch": 1, 2026-04-09 07:45:43.257004 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-09 07:45:43.257015 | orchestrator | "modified": "2026-04-09T02:57:31.386456Z", 2026-04-09 07:45:43.257025 | orchestrator | "created": "2026-04-09T02:57:31.386456Z", 2026-04-09 07:45:43.257035 | orchestrator | "min_mon_release": 18, 2026-04-09 07:45:43.257045 | orchestrator | "min_mon_release_name": "reef", 2026-04-09 07:45:43.257055 | orchestrator | "election_strategy": 1, 2026-04-09 07:45:43.257064 | orchestrator | "disallowed_leaders: ": "", 2026-04-09 07:45:43.257074 | orchestrator | "stretch_mode": false, 2026-04-09 07:45:43.257084 | orchestrator | "tiebreaker_mon": "", 2026-04-09 07:45:43.257093 | orchestrator | "removed_ranks: ": "", 2026-04-09 07:45:43.257105 | orchestrator | "features": { 2026-04-09 07:45:43.257116 | orchestrator | "persistent": [ 2026-04-09 07:45:43.257127 | orchestrator | "kraken", 2026-04-09 07:45:43.257139 | orchestrator | "luminous", 2026-04-09 07:45:43.257149 | orchestrator | "mimic", 2026-04-09 07:45:43.257160 | orchestrator | "osdmap-prune", 2026-04-09 07:45:43.257171 | orchestrator | "nautilus", 2026-04-09 07:45:43.257182 | orchestrator | "octopus", 2026-04-09 07:45:43.257193 | orchestrator | "pacific", 2026-04-09 07:45:43.257203 | orchestrator | "elector-pinging", 2026-04-09 07:45:43.257215 | orchestrator | "quincy", 2026-04-09 07:45:43.257226 | orchestrator | "reef" 2026-04-09 07:45:43.257237 | orchestrator | ], 2026-04-09 07:45:43.257248 | orchestrator | "optional": [] 2026-04-09 07:45:43.257259 | orchestrator | }, 2026-04-09 07:45:43.257270 | orchestrator | "mons": [ 2026-04-09 07:45:43.257281 | orchestrator | { 2026-04-09 
07:45:43.257292 | orchestrator | "rank": 0, 2026-04-09 07:45:43.257303 | orchestrator | "name": "testbed-node-0", 2026-04-09 07:45:43.257314 | orchestrator | "public_addrs": { 2026-04-09 07:45:43.257326 | orchestrator | "addrvec": [ 2026-04-09 07:45:43.257336 | orchestrator | { 2026-04-09 07:45:43.257347 | orchestrator | "type": "v2", 2026-04-09 07:45:43.257359 | orchestrator | "addr": "192.168.16.8:3300", 2026-04-09 07:45:43.257370 | orchestrator | "nonce": 0 2026-04-09 07:45:43.257381 | orchestrator | }, 2026-04-09 07:45:43.257393 | orchestrator | { 2026-04-09 07:45:43.257404 | orchestrator | "type": "v1", 2026-04-09 07:45:43.257414 | orchestrator | "addr": "192.168.16.8:6789", 2026-04-09 07:45:43.257423 | orchestrator | "nonce": 0 2026-04-09 07:45:43.257433 | orchestrator | } 2026-04-09 07:45:43.257469 | orchestrator | ] 2026-04-09 07:45:43.257483 | orchestrator | }, 2026-04-09 07:45:43.257493 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-04-09 07:45:43.257502 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-04-09 07:45:43.257512 | orchestrator | "priority": 0, 2026-04-09 07:45:43.257521 | orchestrator | "weight": 0, 2026-04-09 07:45:43.257531 | orchestrator | "crush_location": "{}" 2026-04-09 07:45:43.257541 | orchestrator | }, 2026-04-09 07:45:43.257550 | orchestrator | { 2026-04-09 07:45:43.257564 | orchestrator | "rank": 1, 2026-04-09 07:45:43.257581 | orchestrator | "name": "testbed-node-1", 2026-04-09 07:45:43.257597 | orchestrator | "public_addrs": { 2026-04-09 07:45:43.257613 | orchestrator | "addrvec": [ 2026-04-09 07:45:43.257629 | orchestrator | { 2026-04-09 07:45:43.257643 | orchestrator | "type": "v2", 2026-04-09 07:45:43.257660 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-09 07:45:43.257693 | orchestrator | "nonce": 0 2026-04-09 07:45:43.257710 | orchestrator | }, 2026-04-09 07:45:43.257728 | orchestrator | { 2026-04-09 07:45:43.257743 | orchestrator | "type": "v1", 2026-04-09 07:45:43.257760 | orchestrator | "addr": 
"192.168.16.11:6789", 2026-04-09 07:45:43.257776 | orchestrator | "nonce": 0 2026-04-09 07:45:43.257793 | orchestrator | } 2026-04-09 07:45:43.257809 | orchestrator | ] 2026-04-09 07:45:43.257826 | orchestrator | }, 2026-04-09 07:45:43.257841 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-09 07:45:43.257857 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-09 07:45:43.257874 | orchestrator | "priority": 0, 2026-04-09 07:45:43.257890 | orchestrator | "weight": 0, 2026-04-09 07:45:43.257906 | orchestrator | "crush_location": "{}" 2026-04-09 07:45:43.257922 | orchestrator | }, 2026-04-09 07:45:43.257939 | orchestrator | { 2026-04-09 07:45:43.257955 | orchestrator | "rank": 2, 2026-04-09 07:45:43.257972 | orchestrator | "name": "testbed-node-2", 2026-04-09 07:45:43.257989 | orchestrator | "public_addrs": { 2026-04-09 07:45:43.258006 | orchestrator | "addrvec": [ 2026-04-09 07:45:43.258086 | orchestrator | { 2026-04-09 07:45:43.258104 | orchestrator | "type": "v2", 2026-04-09 07:45:43.258121 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-09 07:45:43.258136 | orchestrator | "nonce": 0 2026-04-09 07:45:43.258152 | orchestrator | }, 2026-04-09 07:45:43.258162 | orchestrator | { 2026-04-09 07:45:43.258171 | orchestrator | "type": "v1", 2026-04-09 07:45:43.258181 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-09 07:45:43.258190 | orchestrator | "nonce": 0 2026-04-09 07:45:43.258200 | orchestrator | } 2026-04-09 07:45:43.258209 | orchestrator | ] 2026-04-09 07:45:43.258218 | orchestrator | }, 2026-04-09 07:45:43.258228 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-09 07:45:43.258238 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-09 07:45:43.258247 | orchestrator | "priority": 0, 2026-04-09 07:45:43.258257 | orchestrator | "weight": 0, 2026-04-09 07:45:43.258266 | orchestrator | "crush_location": "{}" 2026-04-09 07:45:43.258276 | orchestrator | } 2026-04-09 07:45:43.258285 | orchestrator | ] 2026-04-09 
07:45:43.258299 | orchestrator | } 2026-04-09 07:45:43.258316 | orchestrator | } 2026-04-09 07:45:43.258349 | orchestrator | 2026-04-09 07:45:43.258367 | orchestrator | # Ceph free space status 2026-04-09 07:45:43.258383 | orchestrator | + echo 2026-04-09 07:45:43.258400 | orchestrator | + echo '# Ceph free space status' 2026-04-09 07:45:43.258417 | orchestrator | + echo 2026-04-09 07:45:43.258432 | orchestrator | 2026-04-09 07:45:43.258485 | orchestrator | + ceph df 2026-04-09 07:45:43.849539 | orchestrator | --- RAW STORAGE --- 2026-04-09 07:45:43.849642 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-09 07:45:43.849672 | orchestrator | hdd 120 GiB 112 GiB 7.9 GiB 7.9 GiB 6.62 2026-04-09 07:45:43.849685 | orchestrator | TOTAL 120 GiB 112 GiB 7.9 GiB 7.9 GiB 6.62 2026-04-09 07:45:43.849696 | orchestrator | 2026-04-09 07:45:43.849708 | orchestrator | --- POOLS --- 2026-04-09 07:45:43.849720 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-09 07:45:43.849732 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-04-09 07:45:43.849743 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-09 07:45:43.849754 | orchestrator | cephfs_metadata 3 16 12 KiB 22 118 KiB 0 35 GiB 2026-04-09 07:45:43.849765 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-09 07:45:43.849775 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-09 07:45:43.849786 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-09 07:45:43.849796 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-09 07:45:43.849807 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-09 07:45:43.849831 | orchestrator | .rgw.root 9 32 2.6 KiB 6 48 KiB 0 52 GiB 2026-04-09 07:45:43.849842 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 07:45:43.849852 | orchestrator | volumes 11 32 325 MiB 267 974 MiB 0.90 35 GiB 2026-04-09 07:45:43.849884 | orchestrator | images 
12 32 2.2 GiB 299 6.7 GiB 5.99 35 GiB 2026-04-09 07:45:43.849896 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 07:45:43.849906 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 07:45:43.892979 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-09 07:45:43.946739 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-09 07:45:43.946814 | orchestrator | + osism apply facts 2026-04-09 07:45:45.265903 | orchestrator | 2026-04-09 07:45:45 | INFO  | Prepare task for execution of facts. 2026-04-09 07:45:45.349016 | orchestrator | 2026-04-09 07:45:45 | INFO  | Task 9712318b-66b0-40cd-b355-0c438e267db4 (facts) was prepared for execution. 2026-04-09 07:45:45.349085 | orchestrator | 2026-04-09 07:45:45 | INFO  | It takes a moment until task 9712318b-66b0-40cd-b355-0c438e267db4 (facts) has been started and output is visible here. 2026-04-09 07:46:03.740870 | orchestrator | 2026-04-09 07:46:03.740981 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-09 07:46:03.740998 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-09 07:46:03.741010 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-09 07:46:03.741031 | orchestrator | 2026-04-09 07:46:03.741042 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-09 07:46:03.741052 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-09 07:46:03.741062 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-09 07:46:03.741082 | orchestrator | Thursday 09 April 2026 07:45:50 +0000 (0:00:01.881) 0:00:01.881 ******** 2026-04-09 07:46:03.741092 | orchestrator | ok: [testbed-manager] 2026-04-09 07:46:03.741103 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:46:03.741113 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:46:03.741122 | orchestrator | ok: [testbed-node-2] 2026-04-09 
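The `semver 10.0.0 5.0.0` step above is a three-way version comparison: it yields `1` because the first version is newer, so the `[[ 1 -eq -1 ]]` guard does not fire and the script proceeds. A hedged sketch of such a comparator, assuming plain dotted numeric versions (no pre-release tags) and leaning on `sort -V`:

```shell
# Sketch of a three-way version comparison like the `semver` helper above:
# prints -1 if $1 < $2, 0 if equal, 1 if $1 > $2.
# Assumes dotted numeric versions only; not the full SemVer 2.0 spec.
semver_cmp() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  # sort -V orders versions numerically per component (5.0.0 before 10.0.0).
  lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$lower" = "$1" ]; then echo -1; else echo 1; fi
}

semver_cmp 10.0.0 5.0.0   # 1: first argument is newer
semver_cmp 5.0.0 10.0.0   # -1
```

A gate like the one in the trace would then read `[ "$(semver_cmp "$MANAGER_VERSION" 5.0.0)" -eq -1 ]` to branch only on older versions.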
07:46:03.741132 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:46:03.741141 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:46:03.741151 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:46:03.741161 | orchestrator | 2026-04-09 07:46:03.741171 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-09 07:46:03.741181 | orchestrator | Thursday 09 April 2026 07:45:52 +0000 (0:00:01.902) 0:00:03.784 ******** 2026-04-09 07:46:03.741190 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:46:03.741200 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:46:03.741210 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:46:03.741219 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:46:03.741229 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:46:03.741238 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:46:03.741248 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:46:03.741258 | orchestrator | 2026-04-09 07:46:03.741267 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 07:46:03.741277 | orchestrator | 2026-04-09 07:46:03.741287 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 07:46:03.741297 | orchestrator | Thursday 09 April 2026 07:45:54 +0000 (0:00:01.943) 0:00:05.727 ******** 2026-04-09 07:46:03.741306 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:46:03.741316 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:46:03.741326 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:46:03.741336 | orchestrator | ok: [testbed-manager] 2026-04-09 07:46:03.741345 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:46:03.741355 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:46:03.741364 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:46:03.741374 | orchestrator | 2026-04-09 07:46:03.741384 | orchestrator | PLAY [Gather facts for all hosts 
if using --limit] ***************************** 2026-04-09 07:46:03.741419 | orchestrator | 2026-04-09 07:46:03.741459 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 07:46:03.741472 | orchestrator | Thursday 09 April 2026 07:46:01 +0000 (0:00:06.845) 0:00:12.573 ******** 2026-04-09 07:46:03.741483 | orchestrator | skipping: [testbed-manager] 2026-04-09 07:46:03.741494 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:46:03.741506 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:46:03.741517 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:46:03.741529 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:46:03.741540 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:46:03.741551 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:46:03.741562 | orchestrator | 2026-04-09 07:46:03.741574 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:46:03.741585 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:46:03.741598 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:46:03.741609 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:46:03.741620 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:46:03.741632 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:46:03.741658 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:46:03.741670 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:46:03.741681 | orchestrator | 2026-04-09 07:46:03.741691 | 
orchestrator | 2026-04-09 07:46:03.741703 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:46:03.741714 | orchestrator | Thursday 09 April 2026 07:46:03 +0000 (0:00:01.742) 0:00:14.316 ******** 2026-04-09 07:46:03.741726 | orchestrator | =============================================================================== 2026-04-09 07:46:03.741737 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.85s 2026-04-09 07:46:03.741749 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.94s 2026-04-09 07:46:03.741758 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.90s 2026-04-09 07:46:03.741768 | orchestrator | Gather facts for all hosts ---------------------------------------------- 1.74s 2026-04-09 07:46:03.940830 | orchestrator | + osism validate ceph-mons 2026-04-09 07:47:14.095901 | orchestrator | 2026-04-09 07:47:14.095991 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-09 07:47:14.096001 | orchestrator | 2026-04-09 07:47:14.096009 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-09 07:47:14.096016 | orchestrator | Thursday 09 April 2026 07:46:20 +0000 (0:00:01.816) 0:00:01.816 ******** 2026-04-09 07:47:14.096023 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:47:14.096029 | orchestrator | 2026-04-09 07:47:14.096035 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-09 07:47:14.096041 | orchestrator | Thursday 09 April 2026 07:46:23 +0000 (0:00:02.745) 0:00:04.562 ******** 2026-04-09 07:47:14.096047 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:47:14.096053 | orchestrator | 2026-04-09 07:47:14.096059 | orchestrator | TASK [Define report vars] 
****************************************************** 2026-04-09 07:47:14.096065 | orchestrator | Thursday 09 April 2026 07:46:25 +0000 (0:00:01.729) 0:00:06.291 ******** 2026-04-09 07:47:14.096089 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096096 | orchestrator | 2026-04-09 07:47:14.096102 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-09 07:47:14.096108 | orchestrator | Thursday 09 April 2026 07:46:26 +0000 (0:00:01.207) 0:00:07.499 ******** 2026-04-09 07:47:14.096114 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096120 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:47:14.096127 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:47:14.096132 | orchestrator | 2026-04-09 07:47:14.096139 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-09 07:47:14.096145 | orchestrator | Thursday 09 April 2026 07:46:28 +0000 (0:00:01.740) 0:00:09.239 ******** 2026-04-09 07:47:14.096151 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:47:14.096156 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:47:14.096162 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096168 | orchestrator | 2026-04-09 07:47:14.096174 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-09 07:47:14.096180 | orchestrator | Thursday 09 April 2026 07:46:30 +0000 (0:00:02.698) 0:00:11.938 ******** 2026-04-09 07:47:14.096186 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096192 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:47:14.096198 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:47:14.096204 | orchestrator | 2026-04-09 07:47:14.096210 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-09 07:47:14.096216 | orchestrator | Thursday 09 April 2026 07:46:32 +0000 (0:00:01.383) 0:00:13.322 ******** 2026-04-09 
07:47:14.096222 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096228 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:47:14.096234 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:47:14.096240 | orchestrator | 2026-04-09 07:47:14.096245 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 07:47:14.096251 | orchestrator | Thursday 09 April 2026 07:46:33 +0000 (0:00:01.323) 0:00:14.645 ******** 2026-04-09 07:47:14.096257 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096263 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:47:14.096269 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:47:14.096275 | orchestrator | 2026-04-09 07:47:14.096280 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-09 07:47:14.096286 | orchestrator | Thursday 09 April 2026 07:46:34 +0000 (0:00:01.398) 0:00:16.043 ******** 2026-04-09 07:47:14.096292 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096298 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:47:14.096304 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:47:14.096310 | orchestrator | 2026-04-09 07:47:14.096316 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-09 07:47:14.096322 | orchestrator | Thursday 09 April 2026 07:46:36 +0000 (0:00:01.353) 0:00:17.397 ******** 2026-04-09 07:47:14.096328 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096334 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:47:14.096340 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:47:14.096345 | orchestrator | 2026-04-09 07:47:14.096351 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 07:47:14.096357 | orchestrator | Thursday 09 April 2026 07:46:37 +0000 (0:00:01.387) 0:00:18.785 ******** 2026-04-09 07:47:14.096363 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 07:47:14.096369 | orchestrator | 2026-04-09 07:47:14.096375 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 07:47:14.096419 | orchestrator | Thursday 09 April 2026 07:46:38 +0000 (0:00:01.294) 0:00:20.079 ******** 2026-04-09 07:47:14.096426 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096431 | orchestrator | 2026-04-09 07:47:14.096437 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 07:47:14.096443 | orchestrator | Thursday 09 April 2026 07:46:40 +0000 (0:00:01.275) 0:00:21.355 ******** 2026-04-09 07:47:14.096449 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096455 | orchestrator | 2026-04-09 07:47:14.096467 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:47:14.096475 | orchestrator | Thursday 09 April 2026 07:46:41 +0000 (0:00:01.241) 0:00:22.597 ******** 2026-04-09 07:47:14.096482 | orchestrator | 2026-04-09 07:47:14.096489 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:47:14.096496 | orchestrator | Thursday 09 April 2026 07:46:41 +0000 (0:00:00.449) 0:00:23.046 ******** 2026-04-09 07:47:14.096503 | orchestrator | 2026-04-09 07:47:14.096510 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:47:14.096517 | orchestrator | Thursday 09 April 2026 07:46:42 +0000 (0:00:00.648) 0:00:23.695 ******** 2026-04-09 07:47:14.096524 | orchestrator | 2026-04-09 07:47:14.096531 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 07:47:14.096538 | orchestrator | Thursday 09 April 2026 07:46:43 +0000 (0:00:00.788) 0:00:24.483 ******** 2026-04-09 07:47:14.096545 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096551 | orchestrator | 
2026-04-09 07:47:14.096558 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-09 07:47:14.096565 | orchestrator | Thursday 09 April 2026 07:46:44 +0000 (0:00:01.341) 0:00:25.824 ******** 2026-04-09 07:47:14.096572 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096579 | orchestrator | 2026-04-09 07:47:14.096598 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-09 07:47:14.096605 | orchestrator | Thursday 09 April 2026 07:46:46 +0000 (0:00:01.315) 0:00:27.140 ******** 2026-04-09 07:47:14.096612 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096620 | orchestrator | 2026-04-09 07:47:14.096627 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-09 07:47:14.096634 | orchestrator | Thursday 09 April 2026 07:46:47 +0000 (0:00:01.131) 0:00:28.272 ******** 2026-04-09 07:47:14.096641 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:47:14.096648 | orchestrator | 2026-04-09 07:47:14.096655 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-09 07:47:14.096662 | orchestrator | Thursday 09 April 2026 07:46:49 +0000 (0:00:02.732) 0:00:31.004 ******** 2026-04-09 07:47:14.096669 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096676 | orchestrator | 2026-04-09 07:47:14.096682 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-09 07:47:14.096689 | orchestrator | Thursday 09 April 2026 07:46:51 +0000 (0:00:01.466) 0:00:32.471 ******** 2026-04-09 07:47:14.096708 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096716 | orchestrator | 2026-04-09 07:47:14.096723 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-09 07:47:14.096730 | orchestrator | Thursday 09 April 2026 07:46:52 +0000 (0:00:01.178) 
0:00:33.649 ******** 2026-04-09 07:47:14.096737 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096743 | orchestrator | 2026-04-09 07:47:14.096750 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-09 07:47:14.096757 | orchestrator | Thursday 09 April 2026 07:46:53 +0000 (0:00:01.342) 0:00:34.993 ******** 2026-04-09 07:47:14.096764 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096771 | orchestrator | 2026-04-09 07:47:14.096777 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-09 07:47:14.096784 | orchestrator | Thursday 09 April 2026 07:46:55 +0000 (0:00:01.360) 0:00:36.353 ******** 2026-04-09 07:47:14.096791 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096798 | orchestrator | 2026-04-09 07:47:14.096805 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-09 07:47:14.096812 | orchestrator | Thursday 09 April 2026 07:46:56 +0000 (0:00:01.099) 0:00:37.453 ******** 2026-04-09 07:47:14.096819 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096825 | orchestrator | 2026-04-09 07:47:14.096831 | orchestrator | TASK [Prepare status test vars] ************************************************ 2026-04-09 07:47:14.096837 | orchestrator | Thursday 09 April 2026 07:46:57 +0000 (0:00:01.130) 0:00:38.583 ******** 2026-04-09 07:47:14.096850 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096855 | orchestrator | 2026-04-09 07:47:14.096861 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-09 07:47:14.096867 | orchestrator | Thursday 09 April 2026 07:46:58 +0000 (0:00:01.147) 0:00:39.731 ******** 2026-04-09 07:47:14.096873 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:47:14.096879 | orchestrator | 2026-04-09 07:47:14.096885 | orchestrator | TASK [Set health test data] 
**************************************************** 2026-04-09 07:47:14.096891 | orchestrator | Thursday 09 April 2026 07:47:00 +0000 (0:00:02.321) 0:00:42.053 ******** 2026-04-09 07:47:14.096896 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096902 | orchestrator | 2026-04-09 07:47:14.096908 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-09 07:47:14.096914 | orchestrator | Thursday 09 April 2026 07:47:02 +0000 (0:00:01.285) 0:00:43.338 ******** 2026-04-09 07:47:14.096920 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096926 | orchestrator | 2026-04-09 07:47:14.096932 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-09 07:47:14.096937 | orchestrator | Thursday 09 April 2026 07:47:03 +0000 (0:00:01.119) 0:00:44.458 ******** 2026-04-09 07:47:14.096943 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:47:14.096949 | orchestrator | 2026-04-09 07:47:14.096955 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-09 07:47:14.096961 | orchestrator | Thursday 09 April 2026 07:47:04 +0000 (0:00:01.117) 0:00:45.575 ******** 2026-04-09 07:47:14.096967 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096972 | orchestrator | 2026-04-09 07:47:14.096978 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-09 07:47:14.096984 | orchestrator | Thursday 09 April 2026 07:47:05 +0000 (0:00:01.177) 0:00:46.752 ******** 2026-04-09 07:47:14.096990 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.096996 | orchestrator | 2026-04-09 07:47:14.097002 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-09 07:47:14.097007 | orchestrator | Thursday 09 April 2026 07:47:06 +0000 (0:00:01.130) 0:00:47.883 ******** 2026-04-09 07:47:14.097013 | orchestrator | ok: 
[testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:47:14.097019 | orchestrator | 2026-04-09 07:47:14.097028 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-09 07:47:14.097034 | orchestrator | Thursday 09 April 2026 07:47:08 +0000 (0:00:01.264) 0:00:49.147 ******** 2026-04-09 07:47:14.097040 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:47:14.097046 | orchestrator | 2026-04-09 07:47:14.097052 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 07:47:14.097057 | orchestrator | Thursday 09 April 2026 07:47:09 +0000 (0:00:01.251) 0:00:50.399 ******** 2026-04-09 07:47:14.097063 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:47:14.097069 | orchestrator | 2026-04-09 07:47:14.097075 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 07:47:14.097081 | orchestrator | Thursday 09 April 2026 07:47:12 +0000 (0:00:02.925) 0:00:53.325 ******** 2026-04-09 07:47:14.097086 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:47:14.097092 | orchestrator | 2026-04-09 07:47:14.097099 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 07:47:14.097108 | orchestrator | Thursday 09 April 2026 07:47:13 +0000 (0:00:01.536) 0:00:54.861 ******** 2026-04-09 07:47:14.097118 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:47:14.097128 | orchestrator | 2026-04-09 07:47:14.097142 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:47:21.457089 | orchestrator | Thursday 09 April 2026 07:47:15 +0000 (0:00:01.281) 0:00:56.142 ******** 2026-04-09 07:47:21.457198 | orchestrator | 2026-04-09 07:47:21.457217 | orchestrator | TASK [Flush handlers] 
********************************************************** 2026-04-09 07:47:21.457229 | orchestrator | Thursday 09 April 2026 07:47:15 +0000 (0:00:00.449) 0:00:56.592 ******** 2026-04-09 07:47:21.457266 | orchestrator | 2026-04-09 07:47:21.457278 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:47:21.457289 | orchestrator | Thursday 09 April 2026 07:47:15 +0000 (0:00:00.435) 0:00:57.027 ******** 2026-04-09 07:47:21.457300 | orchestrator | 2026-04-09 07:47:21.457311 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-09 07:47:21.457323 | orchestrator | Thursday 09 April 2026 07:47:16 +0000 (0:00:00.810) 0:00:57.837 ******** 2026-04-09 07:47:21.457334 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:47:21.457345 | orchestrator | 2026-04-09 07:47:21.457356 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 07:47:21.457367 | orchestrator | Thursday 09 April 2026 07:47:19 +0000 (0:00:02.356) 0:01:00.194 ******** 2026-04-09 07:47:21.457444 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-09 07:47:21.457458 | orchestrator |  "msg": [ 2026-04-09 07:47:21.457471 | orchestrator |  "Validator run completed.", 2026-04-09 07:47:21.457483 | orchestrator |  "You can find the report file here:", 2026-04-09 07:47:21.457495 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-09T07:46:21+00:00-report.json", 2026-04-09 07:47:21.457508 | orchestrator |  "on the following host:", 2026-04-09 07:47:21.457519 | orchestrator |  "testbed-manager" 2026-04-09 07:47:21.457530 | orchestrator |  ] 2026-04-09 07:47:21.457541 | orchestrator | } 2026-04-09 07:47:21.457553 | orchestrator | 2026-04-09 07:47:21.457564 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 
07:47:21.457576 | orchestrator | testbed-node-0 : ok=24  changed=4  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 07:47:21.457589 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:47:21.457600 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:47:21.457613 | orchestrator | 2026-04-09 07:47:21.457625 | orchestrator | 2026-04-09 07:47:21.457638 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:47:21.457650 | orchestrator | Thursday 09 April 2026 07:47:21 +0000 (0:00:01.998) 0:01:02.192 ******** 2026-04-09 07:47:21.457663 | orchestrator | =============================================================================== 2026-04-09 07:47:21.457676 | orchestrator | Aggregate test results step one ----------------------------------------- 2.93s 2026-04-09 07:47:21.457688 | orchestrator | Get timestamp for report file ------------------------------------------- 2.75s 2026-04-09 07:47:21.457701 | orchestrator | Get monmap info from one mon container ---------------------------------- 2.73s 2026-04-09 07:47:21.457713 | orchestrator | Get container info ------------------------------------------------------ 2.70s 2026-04-09 07:47:21.457725 | orchestrator | Write report file ------------------------------------------------------- 2.36s 2026-04-09 07:47:21.457738 | orchestrator | Gather status data ------------------------------------------------------ 2.32s 2026-04-09 07:47:21.457751 | orchestrator | Print report file information ------------------------------------------- 2.00s 2026-04-09 07:47:21.457763 | orchestrator | Flush handlers ---------------------------------------------------------- 1.89s 2026-04-09 07:47:21.457775 | orchestrator | Prepare test data for container existance test -------------------------- 1.74s 2026-04-09 07:47:21.457788 | orchestrator | 
Create report output directory ------------------------------------------ 1.73s 2026-04-09 07:47:21.457801 | orchestrator | Flush handlers ---------------------------------------------------------- 1.70s 2026-04-09 07:47:21.457813 | orchestrator | Aggregate test results step two ----------------------------------------- 1.54s 2026-04-09 07:47:21.457823 | orchestrator | Set quorum test data ---------------------------------------------------- 1.47s 2026-04-09 07:47:21.457843 | orchestrator | Prepare test data ------------------------------------------------------- 1.40s 2026-04-09 07:47:21.457870 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 1.39s 2026-04-09 07:47:21.457882 | orchestrator | Set test result to failed if container is missing ----------------------- 1.38s 2026-04-09 07:47:21.457892 | orchestrator | Set fsid test vars ------------------------------------------------------ 1.36s 2026-04-09 07:47:21.457903 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 1.35s 2026-04-09 07:47:21.457914 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 1.34s 2026-04-09 07:47:21.457925 | orchestrator | Print report file information ------------------------------------------- 1.34s 2026-04-09 07:47:21.651522 | orchestrator | + osism validate ceph-mgrs 2026-04-09 07:48:25.821987 | orchestrator | 2026-04-09 07:48:25.822186 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-09 07:48:25.822209 | orchestrator | 2026-04-09 07:48:25.822220 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-09 07:48:25.822230 | orchestrator | Thursday 09 April 2026 07:47:38 +0000 (0:00:01.829) 0:00:01.829 ******** 2026-04-09 07:48:25.822241 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:48:25.822251 | orchestrator | 2026-04-09 
07:48:25.822260 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-09 07:48:25.822270 | orchestrator | Thursday 09 April 2026 07:47:41 +0000 (0:00:02.772) 0:00:04.601 ******** 2026-04-09 07:48:25.822280 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:48:25.822289 | orchestrator | 2026-04-09 07:48:25.822301 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-09 07:48:25.822319 | orchestrator | Thursday 09 April 2026 07:47:43 +0000 (0:00:01.757) 0:00:06.359 ******** 2026-04-09 07:48:25.822400 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.822426 | orchestrator | 2026-04-09 07:48:25.822445 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-09 07:48:25.822457 | orchestrator | Thursday 09 April 2026 07:47:44 +0000 (0:00:01.116) 0:00:07.475 ******** 2026-04-09 07:48:25.822467 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.822477 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:48:25.822486 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:48:25.822496 | orchestrator | 2026-04-09 07:48:25.822508 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-09 07:48:25.822520 | orchestrator | Thursday 09 April 2026 07:47:45 +0000 (0:00:01.733) 0:00:09.208 ******** 2026-04-09 07:48:25.822531 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.822542 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:48:25.822552 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:48:25.822563 | orchestrator | 2026-04-09 07:48:25.822574 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-09 07:48:25.822585 | orchestrator | Thursday 09 April 2026 07:47:48 +0000 (0:00:02.651) 0:00:11.860 ******** 2026-04-09 07:48:25.822597 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 07:48:25.822608 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:48:25.822619 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:48:25.822629 | orchestrator | 2026-04-09 07:48:25.822641 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-09 07:48:25.822652 | orchestrator | Thursday 09 April 2026 07:47:49 +0000 (0:00:01.350) 0:00:13.210 ******** 2026-04-09 07:48:25.822663 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.822675 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:48:25.822686 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:48:25.822697 | orchestrator | 2026-04-09 07:48:25.822709 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 07:48:25.822720 | orchestrator | Thursday 09 April 2026 07:47:51 +0000 (0:00:01.412) 0:00:14.623 ******** 2026-04-09 07:48:25.822731 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.822742 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:48:25.822753 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:48:25.822788 | orchestrator | 2026-04-09 07:48:25.822800 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2026-04-09 07:48:25.822811 | orchestrator | Thursday 09 April 2026 07:47:52 +0000 (0:00:01.340) 0:00:15.964 ******** 2026-04-09 07:48:25.822822 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:48:25.822840 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:48:25.822858 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:48:25.822874 | orchestrator | 2026-04-09 07:48:25.822890 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-09 07:48:25.822905 | orchestrator | Thursday 09 April 2026 07:47:53 +0000 (0:00:01.306) 0:00:17.270 ******** 2026-04-09 07:48:25.822920 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.822935 | orchestrator | 
ok: [testbed-node-1] 2026-04-09 07:48:25.822951 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:48:25.822968 | orchestrator | 2026-04-09 07:48:25.822984 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 07:48:25.822994 | orchestrator | Thursday 09 April 2026 07:47:55 +0000 (0:00:01.332) 0:00:18.603 ******** 2026-04-09 07:48:25.823009 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:48:25.823025 | orchestrator | 2026-04-09 07:48:25.823042 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 07:48:25.823058 | orchestrator | Thursday 09 April 2026 07:47:56 +0000 (0:00:01.261) 0:00:19.865 ******** 2026-04-09 07:48:25.823073 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:48:25.823088 | orchestrator | 2026-04-09 07:48:25.823105 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 07:48:25.823122 | orchestrator | Thursday 09 April 2026 07:47:57 +0000 (0:00:01.279) 0:00:21.144 ******** 2026-04-09 07:48:25.823139 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:48:25.823155 | orchestrator | 2026-04-09 07:48:25.823169 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:48:25.823186 | orchestrator | Thursday 09 April 2026 07:47:59 +0000 (0:00:01.272) 0:00:22.417 ******** 2026-04-09 07:48:25.823203 | orchestrator | 2026-04-09 07:48:25.823218 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:48:25.823234 | orchestrator | Thursday 09 April 2026 07:47:59 +0000 (0:00:00.467) 0:00:22.884 ******** 2026-04-09 07:48:25.823251 | orchestrator | 2026-04-09 07:48:25.823267 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:48:25.823304 | orchestrator | Thursday 09 April 2026 07:48:00 +0000 (0:00:00.702) 
0:00:23.587 ******** 2026-04-09 07:48:25.823321 | orchestrator | 2026-04-09 07:48:25.823376 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 07:48:25.823395 | orchestrator | Thursday 09 April 2026 07:48:01 +0000 (0:00:00.786) 0:00:24.374 ******** 2026-04-09 07:48:25.823412 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:48:25.823429 | orchestrator | 2026-04-09 07:48:25.823440 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-09 07:48:25.823457 | orchestrator | Thursday 09 April 2026 07:48:02 +0000 (0:00:01.300) 0:00:25.674 ******** 2026-04-09 07:48:25.823473 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:48:25.823488 | orchestrator | 2026-04-09 07:48:25.823529 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-09 07:48:25.823546 | orchestrator | Thursday 09 April 2026 07:48:03 +0000 (0:00:01.286) 0:00:26.961 ******** 2026-04-09 07:48:25.823562 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.823578 | orchestrator | 2026-04-09 07:48:25.823594 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-04-09 07:48:25.823611 | orchestrator | Thursday 09 April 2026 07:48:04 +0000 (0:00:01.187) 0:00:28.149 ******** 2026-04-09 07:48:25.823628 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:48:25.823643 | orchestrator | 2026-04-09 07:48:25.823659 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-09 07:48:25.823675 | orchestrator | Thursday 09 April 2026 07:48:08 +0000 (0:00:03.181) 0:00:31.330 ******** 2026-04-09 07:48:25.823704 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.823720 | orchestrator | 2026-04-09 07:48:25.823734 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-09 07:48:25.823749 | 
orchestrator | Thursday 09 April 2026 07:48:09 +0000 (0:00:01.432) 0:00:32.763 ******** 2026-04-09 07:48:25.823766 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.823782 | orchestrator | 2026-04-09 07:48:25.823798 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-09 07:48:25.823814 | orchestrator | Thursday 09 April 2026 07:48:10 +0000 (0:00:01.392) 0:00:34.155 ******** 2026-04-09 07:48:25.823829 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:48:25.823846 | orchestrator | 2026-04-09 07:48:25.823863 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-09 07:48:25.823879 | orchestrator | Thursday 09 April 2026 07:48:12 +0000 (0:00:01.151) 0:00:35.306 ******** 2026-04-09 07:48:25.823895 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:48:25.823911 | orchestrator | 2026-04-09 07:48:25.823927 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-09 07:48:25.823943 | orchestrator | Thursday 09 April 2026 07:48:13 +0000 (0:00:01.117) 0:00:36.424 ******** 2026-04-09 07:48:25.823959 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:48:25.823974 | orchestrator | 2026-04-09 07:48:25.823984 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-09 07:48:25.823994 | orchestrator | Thursday 09 April 2026 07:48:14 +0000 (0:00:01.521) 0:00:37.945 ******** 2026-04-09 07:48:25.824003 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:48:25.824012 | orchestrator | 2026-04-09 07:48:25.824022 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 07:48:25.824031 | orchestrator | Thursday 09 April 2026 07:48:16 +0000 (0:00:01.480) 0:00:39.426 ******** 2026-04-09 07:48:25.824040 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 
07:48:25.824050 | orchestrator | 2026-04-09 07:48:25.824059 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 07:48:25.824068 | orchestrator | Thursday 09 April 2026 07:48:18 +0000 (0:00:02.416) 0:00:41.842 ******** 2026-04-09 07:48:25.824077 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:48:25.824087 | orchestrator | 2026-04-09 07:48:25.824096 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 07:48:25.824105 | orchestrator | Thursday 09 April 2026 07:48:19 +0000 (0:00:01.275) 0:00:43.118 ******** 2026-04-09 07:48:25.824115 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 07:48:25.824124 | orchestrator | 2026-04-09 07:48:25.824140 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:48:25.824156 | orchestrator | Thursday 09 April 2026 07:48:21 +0000 (0:00:01.309) 0:00:44.427 ******** 2026-04-09 07:48:25.824172 | orchestrator | 2026-04-09 07:48:25.824189 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:48:25.824204 | orchestrator | Thursday 09 April 2026 07:48:21 +0000 (0:00:00.446) 0:00:44.873 ******** 2026-04-09 07:48:25.824220 | orchestrator | 2026-04-09 07:48:25.824235 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:48:25.824252 | orchestrator | Thursday 09 April 2026 07:48:22 +0000 (0:00:00.445) 0:00:45.319 ******** 2026-04-09 07:48:25.824268 | orchestrator | 2026-04-09 07:48:25.824285 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-09 07:48:25.824295 | orchestrator | Thursday 09 April 2026 07:48:22 +0000 (0:00:00.817) 0:00:46.136 ******** 2026-04-09 07:48:25.824304 | orchestrator | changed: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] 2026-04-09 07:48:25.824313 | orchestrator | 2026-04-09 07:48:25.824323 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 07:48:25.824354 | orchestrator | Thursday 09 April 2026 07:48:25 +0000 (0:00:02.512) 0:00:48.649 ******** 2026-04-09 07:48:25.824370 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-09 07:48:25.824387 | orchestrator |  "msg": [ 2026-04-09 07:48:25.824397 | orchestrator |  "Validator run completed.", 2026-04-09 07:48:25.824407 | orchestrator |  "You can find the report file here:", 2026-04-09 07:48:25.824417 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-09T07:47:39+00:00-report.json", 2026-04-09 07:48:25.824427 | orchestrator |  "on the following host:", 2026-04-09 07:48:25.824437 | orchestrator |  "testbed-manager" 2026-04-09 07:48:25.824447 | orchestrator |  ] 2026-04-09 07:48:25.824456 | orchestrator | } 2026-04-09 07:48:25.824466 | orchestrator | 2026-04-09 07:48:25.824476 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:48:25.824487 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 07:48:25.824498 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:48:25.824518 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 07:48:27.699188 | orchestrator | 2026-04-09 07:48:27.699288 | orchestrator | 2026-04-09 07:48:27.699306 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:48:27.699322 | orchestrator | Thursday 09 April 2026 07:48:27 +0000 (0:00:01.792) 0:00:50.442 ******** 2026-04-09 07:48:27.699383 | orchestrator | 
=============================================================================== 2026-04-09 07:48:27.699397 | orchestrator | Gather list of mgr modules ---------------------------------------------- 3.18s 2026-04-09 07:48:27.699410 | orchestrator | Get timestamp for report file ------------------------------------------- 2.77s 2026-04-09 07:48:27.699422 | orchestrator | Get container info ------------------------------------------------------ 2.65s 2026-04-09 07:48:27.699435 | orchestrator | Write report file ------------------------------------------------------- 2.51s 2026-04-09 07:48:27.699447 | orchestrator | Aggregate test results step one ----------------------------------------- 2.42s 2026-04-09 07:48:27.699460 | orchestrator | Flush handlers ---------------------------------------------------------- 1.96s 2026-04-09 07:48:27.699472 | orchestrator | Print report file information ------------------------------------------- 1.79s 2026-04-09 07:48:27.699486 | orchestrator | Create report output directory ------------------------------------------ 1.76s 2026-04-09 07:48:27.699498 | orchestrator | Prepare test data for container existance test -------------------------- 1.73s 2026-04-09 07:48:27.699511 | orchestrator | Flush handlers ---------------------------------------------------------- 1.71s 2026-04-09 07:48:27.699524 | orchestrator | Set validation result to passed if no test failed ----------------------- 1.52s 2026-04-09 07:48:27.699536 | orchestrator | Set validation result to failed if a test failed ------------------------ 1.48s 2026-04-09 07:48:27.699548 | orchestrator | Parse mgr module list from json ----------------------------------------- 1.43s 2026-04-09 07:48:27.699561 | orchestrator | Set test result to passed if container is existing ---------------------- 1.41s 2026-04-09 07:48:27.699573 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 1.39s 2026-04-09 07:48:27.699586 | orchestrator | Set test result 
to failed if container is missing ----------------------- 1.35s 2026-04-09 07:48:27.699598 | orchestrator | Prepare test data ------------------------------------------------------- 1.34s 2026-04-09 07:48:27.699611 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 1.33s 2026-04-09 07:48:27.699624 | orchestrator | Aggregate test results step three --------------------------------------- 1.31s 2026-04-09 07:48:27.699637 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 1.31s 2026-04-09 07:48:27.928691 | orchestrator | + osism validate ceph-osds 2026-04-09 07:48:49.958406 | orchestrator | 2026-04-09 07:48:49.958523 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-09 07:48:49.958579 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-09 07:48:49.958593 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-09 07:48:49.958616 | orchestrator | 2026-04-09 07:48:49.958627 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-09 07:48:49.958638 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-09 07:48:49.958649 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-09 07:48:49.958670 | orchestrator | Thursday 09 April 2026 07:48:44 +0000 (0:00:01.509) 0:00:01.509 ******** 2026-04-09 07:48:49.958681 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 07:48:49.958692 | orchestrator | 2026-04-09 07:48:49.958703 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 07:48:49.958714 | orchestrator | Thursday 09 April 2026 07:48:46 +0000 (0:00:01.643) 0:00:03.152 ******** 2026-04-09 07:48:49.958724 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 
2026-04-09 07:48:49.958735 | orchestrator | 2026-04-09 07:48:49.958746 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-09 07:48:49.958756 | orchestrator | Thursday 09 April 2026 07:48:46 +0000 (0:00:00.290) 0:00:03.443 ******** 2026-04-09 07:48:49.958767 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 07:48:49.958778 | orchestrator | 2026-04-09 07:48:49.958789 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-09 07:48:49.958800 | orchestrator | Thursday 09 April 2026 07:48:47 +0000 (0:00:00.756) 0:00:04.200 ******** 2026-04-09 07:48:49.958810 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:49.958822 | orchestrator | 2026-04-09 07:48:49.958833 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-09 07:48:49.958844 | orchestrator | Thursday 09 April 2026 07:48:47 +0000 (0:00:00.135) 0:00:04.335 ******** 2026-04-09 07:48:49.958855 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:49.958865 | orchestrator | 2026-04-09 07:48:49.958876 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-09 07:48:49.958892 | orchestrator | Thursday 09 April 2026 07:48:47 +0000 (0:00:00.149) 0:00:04.485 ******** 2026-04-09 07:48:49.958906 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:49.958920 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:48:49.958932 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:48:49.958946 | orchestrator | 2026-04-09 07:48:49.958959 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-09 07:48:49.958971 | orchestrator | Thursday 09 April 2026 07:48:48 +0000 (0:00:00.766) 0:00:05.251 ******** 2026-04-09 07:48:49.958984 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:49.958997 | orchestrator | 2026-04-09 
07:48:49.959010 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-09 07:48:49.959023 | orchestrator | Thursday 09 April 2026 07:48:48 +0000 (0:00:00.191) 0:00:05.443 ******** 2026-04-09 07:48:49.959035 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:49.959048 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:48:49.959061 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:48:49.959074 | orchestrator | 2026-04-09 07:48:49.959087 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-09 07:48:49.959100 | orchestrator | Thursday 09 April 2026 07:48:48 +0000 (0:00:00.364) 0:00:05.807 ******** 2026-04-09 07:48:49.959112 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:49.959125 | orchestrator | 2026-04-09 07:48:49.959138 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 07:48:49.959151 | orchestrator | Thursday 09 April 2026 07:48:49 +0000 (0:00:00.389) 0:00:06.197 ******** 2026-04-09 07:48:49.959164 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:49.959185 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:48:49.959198 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:48:49.959210 | orchestrator | 2026-04-09 07:48:49.959222 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-09 07:48:49.959235 | orchestrator | Thursday 09 April 2026 07:48:49 +0000 (0:00:00.323) 0:00:06.521 ******** 2026-04-09 07:48:49.959251 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8e2a9d389fc6d270376bfed7584d2423e2f820171a36d14d7582e12bad111acf', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})  2026-04-09 07:48:49.959268 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'1fc7fb333c7582ff8c3dce07717a77bd5e591195b367c28207ad43f299682475', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-09 07:48:49.959281 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7e98dcd23ae38666c9546b846229929961311783a934b70a2eb2531b09ca5bdf', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-09 07:48:49.959311 | orchestrator | skipping: [testbed-node-3] => (item={'id': '32131664585ee078d2c3843c624d63c8b9ad93eb9b094d55942153e1745a6c86', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-09 07:48:49.959345 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e231cad5601713504d3e223a012497e2b3bc03a034938058776cc8f520ec8584', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2026-04-09 07:48:49.959357 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6ba822689e622fbb04aa5a6fbbeeea06d316170953bef0599936da25195929bb', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})  2026-04-09 07:48:49.959369 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a7db9cbe7c19fe5ebe595b7de260ae2580c6268038da4bab8375d14664cf5679', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})  2026-04-09 07:48:49.959380 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'8ef0c788577c12e7c77d2e88b809f22b76520b374d129237372858c97e0f9941', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-09 07:48:49.959403 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1f1bae940b2669ab6e19d79c30afecc19bdaebe555620d0f2122d580a6554f66', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 07:48:49.959419 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7eeb1ba3b131611460c405042aed02eb343e43e0a86a1073dfa6bb15761095a3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 07:48:49.959431 | orchestrator | skipping: [testbed-node-3] => (item={'id': '25886fc6359de829efd8b061c232d6dd56c02f86af93d6393656f8ada46bb784', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 07:48:49.959443 | orchestrator | ok: [testbed-node-3] => (item={'id': '810a5d2182f4daf18b6ee3eadaa4faab5c0491d4dfff010620a01497f5651d78', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-09 07:48:49.959462 | orchestrator | ok: [testbed-node-3] => (item={'id': '2d7a1103ebd1a8c0cbcbb16729e0db60808288600cc55e0548f273a6065c52d6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-09 07:48:49.959473 | orchestrator | skipping: [testbed-node-3] => (item={'id': '05ad050427fa87c0a9fcd8c5ef74c5169dc2c0ee4d48be706427d583366aad8e', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 
'status': 'Up 3 hours'})  2026-04-09 07:48:49.959484 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd048c23d76baac76736bf1e11df4f64d20f6fe60e84670344783faee093bb512', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-09 07:48:49.959496 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c63cc3fede915876746fdc93199b4cf06848b3e43235e0f9f4772319f8288e11', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-09 07:48:49.959507 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aa1582f65d38ced606aa21625cb8241dc9e045c420975cb0eac1a772c2fab215', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:49.959518 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ffe6f9f9462a8805a3bb826a44cc0b78afeac11dca9ad84e88f4f880a752e6be', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:49.959536 | orchestrator | skipping: [testbed-node-3] => (item={'id': '04a00c2e7138cec99bb4c40e4ff1583b29660f7bdc38766994afe6223ee6bbcb', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:50.145091 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd28d0a91f8cc1ffb0ba3fe61dd1114510b82a0601d2047ca01e0ed332dd87c59', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})  2026-04-09 07:48:50.145181 | orchestrator | skipping: [testbed-node-4] => 
(item={'id': 'ba4352682ad82e95ecc727149899f4f7306ee15de8ce8cb21eece3b2715305a9', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-09 07:48:50.145195 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8b428a4c285903ea9be66b60acc0fa11ce1de79120a53230a365389b36f7ffee', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-09 07:48:50.145207 | orchestrator | skipping: [testbed-node-4] => (item={'id': '993f9b946a46d8a2aa2f3813d4c1f45521c576065c2b997de042d6b52a1f6bc9', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-09 07:48:50.145218 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd5e2dbab9637adcdbe1dddbd8de828be5d14fbf5df9dc7bf04b080558f679e42', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2026-04-09 07:48:50.145244 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8708952bbc3458122606c5d3d607677605839ec10059e867b92d0de686915bdc', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 57 minutes (healthy)'})  2026-04-09 07:48:50.145275 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e92300fd4f75e91b493fe9d4f36e554c38392b7dac8f054e078d4ae3df5f2069', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})  2026-04-09 07:48:50.145288 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'f6fd66394ba24c76dbf5bf25c0a83a73d5ca9c267b9aeca8ba9661673064b931', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-09 07:48:50.145301 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4d821a1a75691ffb07b76de55abb5bed7fee9e8e62378acfb77c3de170f48953', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 07:48:50.145313 | orchestrator | skipping: [testbed-node-4] => (item={'id': '68a0de67dc6cc2338f9b7e5852197b192b502dd1e9cbb4b008566d7a3ab3e8fa', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 07:48:50.145356 | orchestrator | skipping: [testbed-node-4] => (item={'id': '20ebaeda56915015e2e792427ed3e7b0771c2b6e256fa18f9a59a060ec34981d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 07:48:50.145368 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b91d82107dae84427562c69a65b15f739deeed4f436a7d447a3d6e55eda418bb', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})  2026-04-09 07:48:50.145379 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8645b21e49073e17d3da8c2eb04558ea8d31ecebf9f81b1e43c982b919734a97', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-09 07:48:50.145391 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a104c8237100ccdf87416b25aa69c34d534d605ac01101a975c2e26a4fd081c8', 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-09 07:48:50.145417 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd68da83d247c46658bd8a1e4134ed1d140bc83c39c79bfb2842389cc2ff0d29b', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-09 07:48:50.145429 | orchestrator | skipping: [testbed-node-5] => (item={'id': '69e5e22606ed647b34aed160f871da03c63f67cb44af33f419861c39f2b298a5', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2026-04-09 07:48:50.145440 | orchestrator | skipping: [testbed-node-5] => (item={'id': '30e6b65b9f29c35adc0c056b09287fa0d62a2857288152d7e54b3a572ee7ace9', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 57 minutes (healthy)'})  2026-04-09 07:48:50.145451 | orchestrator | skipping: [testbed-node-5] => (item={'id': '454d4458da076f282562c273ab20af7c67f033b6c2c74fe2a8f94a53cd84a2f6', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})  2026-04-09 07:48:50.145462 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6d00803ead58d24df542182e796a3b24033f3a5461a83870d16ae7a3fbacffef', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-09 07:48:50.145486 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4a0d56e14931d611c6bfa66337307e95458041b96cea228e51eadb10fb392b62', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 07:48:50.145498 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f15dc19bf434108731408aec385df40e28054d1eff7360ee5f7a7e87dda711b9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-09 07:48:50.145509 | orchestrator | skipping: [testbed-node-5] => (item={'id': '41ce37f2522bf40a2a3c4a41384200400b60284c203618bcad4b1e8d87f390e3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-09 07:48:50.145521 | orchestrator | ok: [testbed-node-5] => (item={'id': '6af14f97468ea1713fd03e88fc01521c85fffacdeb17959998e9246cc9b9cd3b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-09 07:48:50.145533 | orchestrator | ok: [testbed-node-5] => (item={'id': 'fbfe75e24249836b478ab3abda964486c74e9895eb0f1769fcf55b02faef7c68', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-09 07:48:50.145545 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5bc8b90510db03d5a4dbda05a6b75c44f66ecaf6749dfa6b2163e9e02c0b15f2', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:50.145556 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b99c17087517a493fa834dc868bad5e5386e1decd095636c8f6aeae560dbd29a', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-09 07:48:50.145567 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': '76aa81a8fd8e47881b9858e9e38851cbf11d58a8d7043af3150fa7dc5d590a15', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-09 07:48:50.145578 | orchestrator | skipping: [testbed-node-5] => (item={'id': '43d208d655c94b226da49e49cff14a9c903213d0a981c3def12ed53f09723c78', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:50.145596 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6da2d2abe68ab1297900d5cb6309628256a98a2696a53502a461bb60f0fdcf8c', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:59.347718 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ae5666abddd766fd67ce2c24ebb6b9402152bcdd2936e5a4239b4c9cb6492beb', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:59.347831 | orchestrator | ok: [testbed-node-4] => (item={'id': '1c0d81b0b508a4040fdc957f68607d6a7e7c608316d4b92c2ad9c9c97cfb9d8b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-09 07:48:59.347848 | orchestrator | ok: [testbed-node-4] => (item={'id': '2fc0056c4359b1804e51413b8dd849167b7ff76feb59ce9a77e944bb9b709e71', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-09 07:48:59.347862 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c906a4afb2bd38e42535bec4d3fd729be7e42cfda5466725cc4cb92d10d1035d', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 
'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:59.347897 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7c5cd4dd2e64d69dfd3501a28c30ad5972ac5cc3de087474ad9b5a3e6f51b240', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-09 07:48:59.347911 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1f9c9958f5633ef9e5a2f53ab487dce7e170f285f4288d9258a004db4aa5f01b', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-09 07:48:59.347923 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eabf479bb7f36f135c9a4472e5d5bad50d27aaa6378e25c23624384466d9922d', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:59.347934 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cef60e333e8c80294c21110882254e6f5ff6d4ea6a5783907e4af5011e881698', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:59.347946 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'afc228f73cfbe5fd33afb5b80b9bc9beb594f1e71f7b3744b013a69a1c473149', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-09 07:48:59.347958 | orchestrator | 2026-04-09 07:48:59.347972 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-09 07:48:59.347984 | orchestrator | Thursday 09 April 2026 07:48:50 +0000 (0:00:00.795) 0:00:07.316 ******** 2026-04-09 07:48:59.347995 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:59.348007 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 07:48:59.348017 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:48:59.348028 | orchestrator | 2026-04-09 07:48:59.348039 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-09 07:48:59.348050 | orchestrator | Thursday 09 April 2026 07:48:50 +0000 (0:00:00.328) 0:00:07.645 ******** 2026-04-09 07:48:59.348061 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:59.348073 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:48:59.348084 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:48:59.348095 | orchestrator | 2026-04-09 07:48:59.348105 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-09 07:48:59.348134 | orchestrator | Thursday 09 April 2026 07:48:50 +0000 (0:00:00.318) 0:00:07.963 ******** 2026-04-09 07:48:59.348146 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:59.348157 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:48:59.348167 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:48:59.348178 | orchestrator | 2026-04-09 07:48:59.348189 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 07:48:59.348200 | orchestrator | Thursday 09 April 2026 07:48:51 +0000 (0:00:00.478) 0:00:08.442 ******** 2026-04-09 07:48:59.348211 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:59.348221 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:48:59.348232 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:48:59.348243 | orchestrator | 2026-04-09 07:48:59.348254 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-09 07:48:59.348268 | orchestrator | Thursday 09 April 2026 07:48:51 +0000 (0:00:00.320) 0:00:08.762 ******** 2026-04-09 07:48:59.348281 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-09 07:48:59.348295 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-09 07:48:59.348307 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:59.348354 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-09 07:48:59.348399 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-09 07:48:59.348413 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:48:59.348426 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-09 07:48:59.348439 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-09 07:48:59.348451 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:48:59.348464 | orchestrator | 2026-04-09 07:48:59.348477 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-09 07:48:59.348488 | orchestrator | Thursday 09 April 2026 07:48:52 +0000 (0:00:00.350) 0:00:09.113 ******** 2026-04-09 07:48:59.348499 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:59.348510 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:48:59.348521 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:48:59.348532 | orchestrator | 2026-04-09 07:48:59.348543 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-09 07:48:59.348554 | orchestrator | Thursday 09 April 2026 07:48:52 +0000 (0:00:00.348) 0:00:09.462 ******** 2026-04-09 07:48:59.348565 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:59.348576 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:48:59.348587 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:48:59.348597 | orchestrator | 2026-04-09 07:48:59.348608 | orchestrator | TASK [Set test result to failed if an OSD is 
not running] ********************** 2026-04-09 07:48:59.348619 | orchestrator | Thursday 09 April 2026 07:48:52 +0000 (0:00:00.476) 0:00:09.938 ******** 2026-04-09 07:48:59.348630 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:59.348641 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:48:59.348652 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:48:59.348663 | orchestrator | 2026-04-09 07:48:59.348674 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-09 07:48:59.348685 | orchestrator | Thursday 09 April 2026 07:48:53 +0000 (0:00:00.310) 0:00:10.249 ******** 2026-04-09 07:48:59.348696 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:59.348707 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:48:59.348717 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:48:59.348728 | orchestrator | 2026-04-09 07:48:59.348739 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 07:48:59.348750 | orchestrator | Thursday 09 April 2026 07:48:53 +0000 (0:00:00.296) 0:00:10.546 ******** 2026-04-09 07:48:59.348761 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:59.348772 | orchestrator | 2026-04-09 07:48:59.348788 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 07:48:59.348799 | orchestrator | Thursday 09 April 2026 07:48:53 +0000 (0:00:00.273) 0:00:10.819 ******** 2026-04-09 07:48:59.348810 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:59.348821 | orchestrator | 2026-04-09 07:48:59.348832 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 07:48:59.348843 | orchestrator | Thursday 09 April 2026 07:48:54 +0000 (0:00:00.273) 0:00:11.093 ******** 2026-04-09 07:48:59.348854 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:59.348864 | orchestrator | 2026-04-09 07:48:59.348875 | 
orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:48:59.348886 | orchestrator | Thursday 09 April 2026 07:48:54 +0000 (0:00:00.522) 0:00:11.615 ******** 2026-04-09 07:48:59.348897 | orchestrator | 2026-04-09 07:48:59.348908 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:48:59.348919 | orchestrator | Thursday 09 April 2026 07:48:54 +0000 (0:00:00.233) 0:00:11.849 ******** 2026-04-09 07:48:59.348930 | orchestrator | 2026-04-09 07:48:59.348941 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:48:59.348952 | orchestrator | Thursday 09 April 2026 07:48:54 +0000 (0:00:00.071) 0:00:11.920 ******** 2026-04-09 07:48:59.348963 | orchestrator | 2026-04-09 07:48:59.348974 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 07:48:59.348991 | orchestrator | Thursday 09 April 2026 07:48:54 +0000 (0:00:00.076) 0:00:11.997 ******** 2026-04-09 07:48:59.349002 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:59.349013 | orchestrator | 2026-04-09 07:48:59.349024 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-09 07:48:59.349035 | orchestrator | Thursday 09 April 2026 07:48:55 +0000 (0:00:00.294) 0:00:12.291 ******** 2026-04-09 07:48:59.349046 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:48:59.349057 | orchestrator | 2026-04-09 07:48:59.349068 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 07:48:59.349079 | orchestrator | Thursday 09 April 2026 07:48:55 +0000 (0:00:00.268) 0:00:12.560 ******** 2026-04-09 07:48:59.349090 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:59.349101 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:48:59.349112 | orchestrator | ok: [testbed-node-5] 2026-04-09 
07:48:59.349123 | orchestrator | 2026-04-09 07:48:59.349134 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-09 07:48:59.349145 | orchestrator | Thursday 09 April 2026 07:48:55 +0000 (0:00:00.329) 0:00:12.889 ******** 2026-04-09 07:48:59.349156 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:59.349166 | orchestrator | 2026-04-09 07:48:59.349177 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-09 07:48:59.349188 | orchestrator | Thursday 09 April 2026 07:48:56 +0000 (0:00:00.247) 0:00:13.136 ******** 2026-04-09 07:48:59.349199 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 07:48:59.349210 | orchestrator | 2026-04-09 07:48:59.349221 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-09 07:48:59.349232 | orchestrator | Thursday 09 April 2026 07:48:58 +0000 (0:00:02.576) 0:00:15.713 ******** 2026-04-09 07:48:59.349243 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:59.349253 | orchestrator | 2026-04-09 07:48:59.349264 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-09 07:48:59.349275 | orchestrator | Thursday 09 April 2026 07:48:59 +0000 (0:00:00.331) 0:00:16.044 ******** 2026-04-09 07:48:59.349286 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:48:59.349296 | orchestrator | 2026-04-09 07:48:59.349308 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-09 07:48:59.349367 | orchestrator | Thursday 09 April 2026 07:48:59 +0000 (0:00:00.315) 0:00:16.360 ******** 2026-04-09 07:49:14.096806 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:49:14.096924 | orchestrator | 2026-04-09 07:49:14.096942 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-09 07:49:14.096956 | orchestrator 
| Thursday 09 April 2026 07:48:59 +0000 (0:00:00.131) 0:00:16.491 ******** 2026-04-09 07:49:14.096967 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:49:14.096979 | orchestrator | 2026-04-09 07:49:14.096991 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 07:49:14.097002 | orchestrator | Thursday 09 April 2026 07:48:59 +0000 (0:00:00.139) 0:00:16.631 ******** 2026-04-09 07:49:14.097013 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:49:14.097023 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:49:14.097034 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:49:14.097045 | orchestrator | 2026-04-09 07:49:14.097056 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-09 07:49:14.097067 | orchestrator | Thursday 09 April 2026 07:48:59 +0000 (0:00:00.323) 0:00:16.955 ******** 2026-04-09 07:49:14.097078 | orchestrator | changed: [testbed-node-3] 2026-04-09 07:49:14.097089 | orchestrator | changed: [testbed-node-4] 2026-04-09 07:49:14.097100 | orchestrator | changed: [testbed-node-5] 2026-04-09 07:49:14.097111 | orchestrator | 2026-04-09 07:49:14.097122 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-09 07:49:14.097133 | orchestrator | Thursday 09 April 2026 07:49:02 +0000 (0:00:02.746) 0:00:19.701 ******** 2026-04-09 07:49:14.097144 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:49:14.097178 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:49:14.097189 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:49:14.097201 | orchestrator | 2026-04-09 07:49:14.097212 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-09 07:49:14.097223 | orchestrator | Thursday 09 April 2026 07:49:03 +0000 (0:00:00.524) 0:00:20.225 ******** 2026-04-09 07:49:14.097234 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:49:14.097245 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 07:49:14.097256 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:49:14.097266 | orchestrator | 2026-04-09 07:49:14.097277 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-09 07:49:14.097288 | orchestrator | Thursday 09 April 2026 07:49:03 +0000 (0:00:00.553) 0:00:20.779 ******** 2026-04-09 07:49:14.097299 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:49:14.097340 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:49:14.097353 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:49:14.097366 | orchestrator | 2026-04-09 07:49:14.097394 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-09 07:49:14.097407 | orchestrator | Thursday 09 April 2026 07:49:04 +0000 (0:00:00.315) 0:00:21.095 ******** 2026-04-09 07:49:14.097419 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:49:14.097432 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:49:14.097445 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:49:14.097458 | orchestrator | 2026-04-09 07:49:14.097471 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-09 07:49:14.097483 | orchestrator | Thursday 09 April 2026 07:49:04 +0000 (0:00:00.364) 0:00:21.460 ******** 2026-04-09 07:49:14.097496 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:49:14.097510 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:49:14.097522 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:49:14.097535 | orchestrator | 2026-04-09 07:49:14.097548 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-09 07:49:14.097561 | orchestrator | Thursday 09 April 2026 07:49:04 +0000 (0:00:00.520) 0:00:21.980 ******** 2026-04-09 07:49:14.097574 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:49:14.097587 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
07:49:14.097599 | orchestrator | skipping: [testbed-node-5] 2026-04-09 07:49:14.097612 | orchestrator | 2026-04-09 07:49:14.097625 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 07:49:14.097638 | orchestrator | Thursday 09 April 2026 07:49:05 +0000 (0:00:00.348) 0:00:22.329 ******** 2026-04-09 07:49:14.097651 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:49:14.097664 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:49:14.097676 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:49:14.097689 | orchestrator | 2026-04-09 07:49:14.097702 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-09 07:49:14.097716 | orchestrator | Thursday 09 April 2026 07:49:05 +0000 (0:00:00.541) 0:00:22.871 ******** 2026-04-09 07:49:14.097728 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:49:14.097742 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:49:14.097755 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:49:14.097765 | orchestrator | 2026-04-09 07:49:14.097776 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-09 07:49:14.097787 | orchestrator | Thursday 09 April 2026 07:49:06 +0000 (0:00:00.738) 0:00:23.610 ******** 2026-04-09 07:49:14.097798 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:49:14.097809 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:49:14.097820 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:49:14.097830 | orchestrator | 2026-04-09 07:49:14.097841 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-09 07:49:14.097852 | orchestrator | Thursday 09 April 2026 07:49:06 +0000 (0:00:00.347) 0:00:23.957 ******** 2026-04-09 07:49:14.097863 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:49:14.097874 | orchestrator | skipping: [testbed-node-4] 2026-04-09 07:49:14.097884 | orchestrator | skipping: 
[testbed-node-5] 2026-04-09 07:49:14.097904 | orchestrator | 2026-04-09 07:49:14.097915 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-09 07:49:14.097926 | orchestrator | Thursday 09 April 2026 07:49:07 +0000 (0:00:00.327) 0:00:24.284 ******** 2026-04-09 07:49:14.097937 | orchestrator | ok: [testbed-node-3] 2026-04-09 07:49:14.097947 | orchestrator | ok: [testbed-node-4] 2026-04-09 07:49:14.097958 | orchestrator | ok: [testbed-node-5] 2026-04-09 07:49:14.097969 | orchestrator | 2026-04-09 07:49:14.097979 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-09 07:49:14.097990 | orchestrator | Thursday 09 April 2026 07:49:07 +0000 (0:00:00.340) 0:00:24.625 ******** 2026-04-09 07:49:14.098001 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 07:49:14.098012 | orchestrator | 2026-04-09 07:49:14.098091 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-09 07:49:14.098103 | orchestrator | Thursday 09 April 2026 07:49:08 +0000 (0:00:00.783) 0:00:25.408 ******** 2026-04-09 07:49:14.098132 | orchestrator | skipping: [testbed-node-3] 2026-04-09 07:49:14.098143 | orchestrator | 2026-04-09 07:49:14.098154 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 07:49:14.098165 | orchestrator | Thursday 09 April 2026 07:49:08 +0000 (0:00:00.274) 0:00:25.682 ******** 2026-04-09 07:49:14.098177 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 07:49:14.098187 | orchestrator | 2026-04-09 07:49:14.098198 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 07:49:14.098209 | orchestrator | Thursday 09 April 2026 07:49:10 +0000 (0:00:01.824) 0:00:27.508 ******** 2026-04-09 07:49:14.098220 | orchestrator | ok: [testbed-node-3 -> 
testbed-manager(192.168.16.5)] 2026-04-09 07:49:14.098231 | orchestrator | 2026-04-09 07:49:14.098241 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 07:49:14.098252 | orchestrator | Thursday 09 April 2026 07:49:10 +0000 (0:00:00.292) 0:00:27.800 ******** 2026-04-09 07:49:14.098263 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 07:49:14.098274 | orchestrator | 2026-04-09 07:49:14.098285 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:49:14.098295 | orchestrator | Thursday 09 April 2026 07:49:11 +0000 (0:00:00.295) 0:00:28.096 ******** 2026-04-09 07:49:14.098332 | orchestrator | 2026-04-09 07:49:14.098349 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:49:14.098366 | orchestrator | Thursday 09 April 2026 07:49:11 +0000 (0:00:00.086) 0:00:28.182 ******** 2026-04-09 07:49:14.098383 | orchestrator | 2026-04-09 07:49:14.098401 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 07:49:14.098418 | orchestrator | Thursday 09 April 2026 07:49:11 +0000 (0:00:00.076) 0:00:28.259 ******** 2026-04-09 07:49:14.098436 | orchestrator | 2026-04-09 07:49:14.098456 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-09 07:49:14.098474 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-09 07:49:14.098494 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-09 07:49:14.098523 | orchestrator | Thursday 09 April 2026 07:49:11 +0000 (0:00:00.082) 0:00:28.341 ******** 2026-04-09 07:49:14.098535 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 07:49:14.098546 | orchestrator | 2026-04-09 07:49:14.098557 | orchestrator | TASK [Print report file 
information] ******************************************* 2026-04-09 07:49:14.098567 | orchestrator | Thursday 09 April 2026 07:49:12 +0000 (0:00:01.358) 0:00:29.700 ******** 2026-04-09 07:49:14.098578 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-09 07:49:14.098589 | orchestrator |  "msg": [ 2026-04-09 07:49:14.098601 | orchestrator |  "Validator run completed.", 2026-04-09 07:49:14.098612 | orchestrator |  "You can find the report file here:", 2026-04-09 07:49:14.098633 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-09T07:48:45+00:00-report.json", 2026-04-09 07:49:14.098645 | orchestrator |  "on the following host:", 2026-04-09 07:49:14.098656 | orchestrator |  "testbed-manager" 2026-04-09 07:49:14.098667 | orchestrator |  ] 2026-04-09 07:49:14.098679 | orchestrator | } 2026-04-09 07:49:14.098690 | orchestrator | 2026-04-09 07:49:14.098701 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:49:14.098713 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 07:49:14.098725 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 07:49:14.098737 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 07:49:14.098747 | orchestrator | 2026-04-09 07:49:14.098758 | orchestrator | 2026-04-09 07:49:14.098769 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:49:14.098780 | orchestrator | Thursday 09 April 2026 07:49:14 +0000 (0:00:01.384) 0:00:31.084 ******** 2026-04-09 07:49:14.098791 | orchestrator | =============================================================================== 2026-04-09 07:49:14.098802 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.75s 
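The validator run that just finished fetches `ceph osd tree` as JSON and flags OSDs that are not both up and in. A minimal standalone sketch of that check, run against canned sample output (the JSON below is illustrative, not from this job; field names mirror ceph's osd tree format, and the playbook's exact filter may differ — assumes `python3` is available as on the testbed nodes):

```shell
# Canned sample of `ceph osd tree -f json` output (illustrative only).
osd_tree='{"nodes":[
  {"id":0,"name":"osd.0","type":"osd","status":"up","reweight":1.0},
  {"id":1,"name":"osd.1","type":"osd","status":"down","reweight":0.0}
]}'

# An OSD counts as unhealthy when it is not "up" or has been weighted out.
bad_osds=$(echo "$osd_tree" | python3 -c '
import json, sys
tree = json.load(sys.stdin)
print(" ".join(n["name"] for n in tree["nodes"]
               if n["type"] == "osd"
               and (n["status"] != "up" or n["reweight"] == 0)))
')
echo "not up or in: ${bad_osds:-none}"
```

With the sample data this reports `osd.1`; in the run above the list is empty, so the "Fail test if OSDs are not up or in" task is skipped and the pass task fires instead.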
2026-04-09 07:49:14.098813 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.58s 2026-04-09 07:49:14.098824 | orchestrator | Aggregate test results step one ----------------------------------------- 1.83s 2026-04-09 07:49:14.098834 | orchestrator | Get timestamp for report file ------------------------------------------- 1.64s 2026-04-09 07:49:14.098845 | orchestrator | Print report file information ------------------------------------------- 1.38s 2026-04-09 07:49:14.098856 | orchestrator | Write report file ------------------------------------------------------- 1.36s 2026-04-09 07:49:14.098867 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.80s 2026-04-09 07:49:14.098877 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.78s 2026-04-09 07:49:14.098888 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.77s 2026-04-09 07:49:14.098899 | orchestrator | Create report output directory ------------------------------------------ 0.76s 2026-04-09 07:49:14.098910 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.74s 2026-04-09 07:49:14.098921 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.55s 2026-04-09 07:49:14.098940 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2026-04-09 07:49:14.348536 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.52s 2026-04-09 07:49:14.348636 | orchestrator | Aggregate test results step three --------------------------------------- 0.52s 2026-04-09 07:49:14.348652 | orchestrator | Fail if count of unencrypted OSDs does not match ------------------------ 0.52s 2026-04-09 07:49:14.348663 | orchestrator | Set test result to passed if count matches ------------------------------ 0.48s 2026-04-09 
07:49:14.348674 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.48s 2026-04-09 07:49:14.348685 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.39s 2026-04-09 07:49:14.348695 | orchestrator | Flush handlers ---------------------------------------------------------- 0.38s 2026-04-09 07:49:14.549641 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-09 07:49:14.556087 | orchestrator | + set -e 2026-04-09 07:49:14.556168 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 07:49:14.556182 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 07:49:14.556190 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 07:49:14.556197 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 07:49:14.556204 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 07:49:14.556560 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 07:49:14.556599 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 07:49:14.556607 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-09 07:49:14.556614 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-09 07:49:14.556621 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 07:49:14.556627 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 07:49:14.556634 | orchestrator | ++ export ARA=false 2026-04-09 07:49:14.556641 | orchestrator | ++ ARA=false 2026-04-09 07:49:14.556648 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 07:49:14.556654 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 07:49:14.556661 | orchestrator | ++ export TEMPEST=false 2026-04-09 07:49:14.556668 | orchestrator | ++ TEMPEST=false 2026-04-09 07:49:14.556674 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 07:49:14.556681 | orchestrator | ++ IS_ZUUL=true 2026-04-09 07:49:14.556688 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 07:49:14.556695 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2026-04-09 07:49:14.556701 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 07:49:14.556708 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 07:49:14.556714 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 07:49:14.556721 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 07:49:14.556728 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 07:49:14.556734 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 07:49:14.556741 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 07:49:14.556748 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 07:49:14.556755 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-09 07:49:14.556761 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-09 07:49:14.556768 | orchestrator | + source /etc/os-release 2026-04-09 07:49:14.556774 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-09 07:49:14.556781 | orchestrator | ++ NAME=Ubuntu 2026-04-09 07:49:14.556787 | orchestrator | ++ VERSION_ID=24.04 2026-04-09 07:49:14.556806 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-09 07:49:14.556824 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-09 07:49:14.556831 | orchestrator | ++ ID=ubuntu 2026-04-09 07:49:14.556837 | orchestrator | ++ ID_LIKE=debian 2026-04-09 07:49:14.556844 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-09 07:49:14.556851 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-09 07:49:14.556857 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-09 07:49:14.556864 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-09 07:49:14.556871 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-09 07:49:14.556878 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-09 07:49:14.556885 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-09 07:49:14.556892 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl 
monitoring-plugins-basic mysql-client' 2026-04-09 07:49:14.556900 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-09 07:49:14.587678 | orchestrator | 2026-04-09 07:49:14.587787 | orchestrator | # Status of Elasticsearch 2026-04-09 07:49:14.587809 | orchestrator | + pushd /opt/configuration/contrib 2026-04-09 07:49:14.587828 | orchestrator | + echo 2026-04-09 07:49:14.587846 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-09 07:49:14.587863 | orchestrator | + echo 2026-04-09 07:49:14.587880 | orchestrator | 2026-04-09 07:49:14.587899 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-09 07:49:14.792732 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-09 07:49:14.793204 | orchestrator | 2026-04-09 07:49:14.793237 | orchestrator | # Status of MariaDB 2026-04-09 07:49:14.793251 | orchestrator | 2026-04-09 07:49:14.793263 | orchestrator | + echo 2026-04-09 07:49:14.793275 | orchestrator | + echo '# Status of MariaDB' 2026-04-09 07:49:14.793286 | orchestrator | + echo 2026-04-09 07:49:14.793638 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-09 07:49:14.859667 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 07:49:14.859758 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-09 07:49:14.859772 | orchestrator | + MARIADB_USER=root_shard_0 2026-04-09 07:49:14.859784 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-04-09 07:49:14.947650 | orchestrator | Reading package lists... 
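The `semver 9.5.0 10.0.0-0` call traced above prints `-1` because the installed manager is below 10.0.0, and the `[[ -1 -ge 0 ]]` test failing is what selects the sharded `root_shard_0` MariaDB user. A rough equivalent of that gate using only `sort -V` (the `semver` helper's semantics are approximated here, and the user name for the `>= 10.0.0` branch is a hypothetical placeholder, not taken from this job):

```shell
ver_cmp() {
  # Print -1, 0 or 1 depending on how $1 compares to $2 (GNU sort -V order;
  # note sort -V ranks pre-release suffixes differently from strict semver).
  if [ "$1" = "$2" ]; then echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
  else echo 1
  fi
}

if [ "$(ver_cmp 9.5.0 10.0.0-0)" -ge 0 ]; then
  MARIADB_USER=root          # hypothetical name for the >= 10.0.0 branch
else
  MARIADB_USER=root_shard_0  # the branch actually taken in the log above
fi
echo "$MARIADB_USER"
```

The same user is then passed to `check_galera_cluster -u root_shard_0`, which reports the expected `wsrep_cluster_size` of 3.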
2026-04-09 07:49:15.339892 | orchestrator | Building dependency tree... 2026-04-09 07:49:15.342103 | orchestrator | Reading state information... 2026-04-09 07:49:15.729276 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-04-09 07:49:15.729454 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-04-09 07:49:16.429392 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-04-09 07:49:16.429496 | orchestrator | 2026-04-09 07:49:16.429512 | orchestrator | # Status of Prometheus 2026-04-09 07:49:16.429525 | orchestrator | 2026-04-09 07:49:16.429537 | orchestrator | + echo 2026-04-09 07:49:16.429548 | orchestrator | + echo '# Status of Prometheus' 2026-04-09 07:49:16.429560 | orchestrator | + echo 2026-04-09 07:49:16.429576 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-09 07:49:16.511703 | orchestrator | Unauthorized 2026-04-09 07:49:16.516120 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-09 07:49:16.576258 | orchestrator | Unauthorized 2026-04-09 07:49:16.579693 | orchestrator | 2026-04-09 07:49:16.579746 | orchestrator | # Status of RabbitMQ 2026-04-09 07:49:16.579758 | orchestrator | 2026-04-09 07:49:16.579767 | orchestrator | + echo 2026-04-09 07:49:16.579776 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-09 07:49:16.579784 | orchestrator | + echo 2026-04-09 07:49:16.581247 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-09 07:49:16.638938 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 07:49:16.639033 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-09 07:49:16.639057 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-04-09 07:49:17.163584 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-04-09 07:49:17.176080 | orchestrator | 2026-04-09 07:49:17.176158 | 
orchestrator | # Status of Redis 2026-04-09 07:49:17.176171 | orchestrator | 2026-04-09 07:49:17.176181 | orchestrator | + echo 2026-04-09 07:49:17.176192 | orchestrator | + echo '# Status of Redis' 2026-04-09 07:49:17.176202 | orchestrator | + echo 2026-04-09 07:49:17.176213 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-09 07:49:17.183216 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001642s;;;0.000000;10.000000 2026-04-09 07:49:17.183271 | orchestrator | 2026-04-09 07:49:17.183286 | orchestrator | # Create backup of MariaDB database 2026-04-09 07:49:17.183299 | orchestrator | 2026-04-09 07:49:17.183348 | orchestrator | + popd 2026-04-09 07:49:17.183361 | orchestrator | + echo 2026-04-09 07:49:17.183372 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-09 07:49:17.183382 | orchestrator | + echo 2026-04-09 07:49:17.183394 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-09 07:49:18.469200 | orchestrator | 2026-04-09 07:49:18 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-09 07:49:18.534476 | orchestrator | 2026-04-09 07:49:18 | INFO  | Task f8763ba9-de01-43a8-939d-0e3bdd09e588 (mariadb_backup) was prepared for execution. 2026-04-09 07:49:18.534570 | orchestrator | 2026-04-09 07:49:18 | INFO  | It takes a moment until task f8763ba9-de01-43a8-939d-0e3bdd09e588 (mariadb_backup) has been started and output is visible here. 
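The `check_tcp` invocation above sends an AUTH/PING/INFO sequence in one TCP exchange and requires `PONG`, `role:master`, and a `slave0:ip=192.168.16.1…` marker in the reply. Its pattern matching can be sketched offline against a canned reply (the response text below is illustrative, not captured from this job):

```shell
# Canned Redis reply covering the three markers the probe expects.
response=$'+PONG\r\nrole:master\r\nslave0:ip=192.168.16.11,port=6379,state=online'

result=OK
for expect in PONG role:master slave0:ip=192.168.16.1; do
  # grep -q mirrors check_tcp's -e substring expectations: any miss fails.
  printf '%s' "$response" | grep -q -- "$expect" || result=CRITICAL
done
echo "$result"
```

All three markers match the canned reply, so this prints `OK`, matching the `TCP OK` line in the log; dropping any marker from the response would flip it to `CRITICAL`.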
2026-04-09 07:49:57.939020 | orchestrator | 2026-04-09 07:49:57.939137 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 07:49:57.939155 | orchestrator | 2026-04-09 07:49:57.939168 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 07:49:57.939179 | orchestrator | Thursday 09 April 2026 07:49:23 +0000 (0:00:01.421) 0:00:01.421 ******** 2026-04-09 07:49:57.939191 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:49:57.939203 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:49:57.939214 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:49:57.939225 | orchestrator | 2026-04-09 07:49:57.939236 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 07:49:57.939247 | orchestrator | Thursday 09 April 2026 07:49:25 +0000 (0:00:01.838) 0:00:03.260 ******** 2026-04-09 07:49:57.939258 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-09 07:49:57.939269 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-09 07:49:57.939338 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-09 07:49:57.939392 | orchestrator | 2026-04-09 07:49:57.939406 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-09 07:49:57.939417 | orchestrator | 2026-04-09 07:49:57.939428 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-09 07:49:57.939439 | orchestrator | Thursday 09 April 2026 07:49:29 +0000 (0:00:03.814) 0:00:07.074 ******** 2026-04-09 07:49:57.939450 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 07:49:57.939462 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-09 07:49:57.939472 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-09 07:49:57.939483 | orchestrator | 
2026-04-09 07:49:57.939495 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 07:49:57.939506 | orchestrator | Thursday 09 April 2026 07:49:31 +0000 (0:00:02.874) 0:00:09.949 ******** 2026-04-09 07:49:57.939517 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 07:49:57.939529 | orchestrator | 2026-04-09 07:49:57.939540 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-09 07:49:57.939551 | orchestrator | Thursday 09 April 2026 07:49:33 +0000 (0:00:02.035) 0:00:11.985 ******** 2026-04-09 07:49:57.939563 | orchestrator | ok: [testbed-node-0] 2026-04-09 07:49:57.939577 | orchestrator | ok: [testbed-node-1] 2026-04-09 07:49:57.939651 | orchestrator | ok: [testbed-node-2] 2026-04-09 07:49:57.939672 | orchestrator | 2026-04-09 07:49:57.939691 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-09 07:49:57.939710 | orchestrator | Thursday 09 April 2026 07:49:38 +0000 (0:00:05.032) 0:00:17.018 ******** 2026-04-09 07:49:57.939729 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:49:57.939748 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:49:57.939767 | orchestrator | changed: [testbed-node-0] 2026-04-09 07:49:57.939785 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-09 07:49:57.939804 | orchestrator | 2026-04-09 07:49:57.939823 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-09 07:49:57.939840 | orchestrator | skipping: no hosts matched 2026-04-09 07:49:57.939859 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-09 07:49:57.939878 | orchestrator | 2026-04-09 07:49:57.939897 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-09 07:49:57.939916 | orchestrator | skipping: no hosts matched 2026-04-09 07:49:57.939934 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-09 07:49:57.939947 | orchestrator | mariadb_bootstrap_restart 2026-04-09 07:49:57.939960 | orchestrator | 2026-04-09 07:49:57.939971 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-09 07:49:57.939982 | orchestrator | skipping: no hosts matched 2026-04-09 07:49:57.939993 | orchestrator | 2026-04-09 07:49:57.940004 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-09 07:49:57.940015 | orchestrator | 2026-04-09 07:49:57.940025 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-09 07:49:57.940036 | orchestrator | Thursday 09 April 2026 07:49:53 +0000 (0:00:14.669) 0:00:31.687 ******** 2026-04-09 07:49:57.940046 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:49:57.940057 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:49:57.940068 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:49:57.940078 | orchestrator | 2026-04-09 07:49:57.940089 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-09 07:49:57.940100 | orchestrator | Thursday 09 April 2026 07:49:55 +0000 (0:00:01.549) 0:00:33.237 ******** 2026-04-09 07:49:57.940110 | orchestrator | skipping: [testbed-node-0] 2026-04-09 07:49:57.940121 | orchestrator | skipping: [testbed-node-1] 2026-04-09 07:49:57.940131 | orchestrator | skipping: [testbed-node-2] 2026-04-09 07:49:57.940142 | orchestrator | 2026-04-09 07:49:57.940153 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:49:57.940178 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-09 07:49:57.940190 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 07:49:57.940201 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 07:49:57.940212 | orchestrator | 2026-04-09 07:49:57.940222 | orchestrator | 2026-04-09 07:49:57.940250 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:49:57.940262 | orchestrator | Thursday 09 April 2026 07:49:57 +0000 (0:00:02.369) 0:00:35.607 ******** 2026-04-09 07:49:57.940273 | orchestrator | =============================================================================== 2026-04-09 07:49:57.940308 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 14.67s 2026-04-09 07:49:57.940342 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 5.03s 2026-04-09 07:49:57.940354 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.81s 2026-04-09 07:49:57.940365 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 2.88s 2026-04-09 07:49:57.940375 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 2.37s 2026-04-09 07:49:57.940386 | orchestrator | mariadb : include_tasks ------------------------------------------------- 2.04s 2026-04-09 07:49:57.940397 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.84s 2026-04-09 07:49:57.940408 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 1.55s 2026-04-09 07:49:58.117122 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-09 07:49:58.125975 | orchestrator | + set -e 2026-04-09 07:49:58.126085 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 07:49:58.126099 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-09 07:49:58.126109 | orchestrator | ++ INTERACTIVE=false 2026-04-09 07:49:58.126118 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 07:49:58.126195 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 07:49:58.126216 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-09 07:49:58.127400 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-09 07:49:58.134071 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-09 07:49:58.134127 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-09 07:49:58.134143 | orchestrator | + export OS_CLOUD=admin 2026-04-09 07:49:58.134156 | orchestrator | + OS_CLOUD=admin 2026-04-09 07:49:58.134169 | orchestrator | + echo 2026-04-09 07:49:58.134182 | orchestrator | 2026-04-09 07:49:58.134196 | orchestrator | # OpenStack endpoints 2026-04-09 07:49:58.134209 | orchestrator | 2026-04-09 07:49:58.134222 | orchestrator | + echo '# OpenStack endpoints' 2026-04-09 07:49:58.134235 | orchestrator | + echo 2026-04-09 07:49:58.134248 | orchestrator | + openstack endpoint list 2026-04-09 07:50:01.151194 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-09 07:50:01.151341 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-09 07:50:01.151359 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-09 07:50:01.151370 | orchestrator | | 01198f1f1b394395ae7d631ae91b24f0 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-09 07:50:01.151381 | orchestrator | | 1756fca98f3948f790ae1155e765f596 | RegionOne | skyline | 
panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-04-09 07:50:01.151392 | orchestrator | | 1d56884561b645b38550ba654ecdfeba | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-09 07:50:01.151432 | orchestrator | | 26748ab3034643b2aa9c617cc05e04c8 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-04-09 07:50:01.151444 | orchestrator | | 3f09e594988e411eabb05aca176f9b46 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-09 07:50:01.151455 | orchestrator | | 450c268822ff4be986c9edb5bca19ec6 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-04-09 07:50:01.151466 | orchestrator | | 494d04b6010c4407ba8bc5f6f9cccb12 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-09 07:50:01.151476 | orchestrator | | 4cff745d900c4b7fa107ca5e90f8fb00 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-09 07:50:01.151488 | orchestrator | | 581f9361869a4a60b70e42d1a38c1c15 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-09 07:50:01.151528 | orchestrator | | 5ee962acb09640e18e996431623b1b0d | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-09 07:50:01.151540 | orchestrator | | 63ee523ffba44f14b4e1b5eab789f703 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-04-09 07:50:01.151552 | orchestrator | | 644c82f692a546769acb48c32227647a | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-09 07:50:01.151563 | orchestrator | | 75cf4d946075411b8387be645ae040de | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-09 07:50:01.151574 | 
orchestrator | | 795db67106ab465b907079d7a45fe954 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-09 07:50:01.151584 | orchestrator | | 7b71b8ed10ab4c72a7f2ca008bd7d34f | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-04-09 07:50:01.151595 | orchestrator | | 87c948e37ee449ef839d7b1b02f588f4 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-09 07:50:01.151606 | orchestrator | | 8ad355ad85694f4ca13519a184b0b55c | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-09 07:50:01.151617 | orchestrator | | 8cdc8e37d3ea4cb6a782dd109cb6fa63 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-09 07:50:01.151628 | orchestrator | | 8f3a0a6b4b374aa48e4a521db8b47058 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-09 07:50:01.151654 | orchestrator | | ad85df22d54940a7a8b2f3c30eca116a | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-09 07:50:01.151683 | orchestrator | | af3e9b6cc57f4edd8b6fa6427fe79dde | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-09 07:50:01.151695 | orchestrator | | be6bac4b55a04670b170964301d8247b | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-04-09 07:50:01.151707 | orchestrator | | bfd2168f7891445c8ca1c0f557a7cc14 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-09 07:50:01.151727 | orchestrator | | c5ffe25b1e194c9b978b6a0a9410d6ca | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-09 07:50:01.151742 | orchestrator | | eb1264ab1ed34ba9bc459033843eae33 | RegionOne | magnum | 
container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-09 07:50:01.151755 | orchestrator | | ef903724adcc4ed2bb3bdb0394e7a2af | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-09 07:50:01.151767 | orchestrator | | f0b7aba7fb2c4448aed52713173f4f62 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-09 07:50:01.151780 | orchestrator | | f0ecc2a665db4687ba89598f79a9c45d | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-09 07:50:01.151793 | orchestrator | | f233ca1285d1417c98046ccaf16f00fb | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-09 07:50:01.151806 | orchestrator | | f7523afcaa4c43a6a5d02636e6321eb5 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-09 07:50:01.151819 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-09 07:50:01.391154 | orchestrator | 2026-04-09 07:50:01.391256 | orchestrator | # Cinder 2026-04-09 07:50:01.391271 | orchestrator | 2026-04-09 07:50:01.391320 | orchestrator | + echo 2026-04-09 07:50:01.391333 | orchestrator | + echo '# Cinder' 2026-04-09 07:50:01.391344 | orchestrator | + echo 2026-04-09 07:50:01.391356 | orchestrator | + openstack volume service list 2026-04-09 07:50:04.049499 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-09 07:50:04.049608 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-09 07:50:04.049624 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 
2026-04-09 07:50:04.049636 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-09T07:49:58.000000 | 2026-04-09 07:50:04.049648 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-09T07:49:57.000000 | 2026-04-09 07:50:04.049659 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-09T07:49:57.000000 | 2026-04-09 07:50:04.049670 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-09T07:49:57.000000 | 2026-04-09 07:50:04.049681 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-09T07:49:54.000000 | 2026-04-09 07:50:04.049692 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-09T07:50:01.000000 | 2026-04-09 07:50:04.049703 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-09T07:49:59.000000 | 2026-04-09 07:50:04.049714 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-09T07:49:56.000000 | 2026-04-09 07:50:04.049725 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-09T07:49:59.000000 | 2026-04-09 07:50:04.049736 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-09 07:50:04.316913 | orchestrator | 2026-04-09 07:50:04.317007 | orchestrator | # Neutron 2026-04-09 07:50:04.317022 | orchestrator | 2026-04-09 07:50:04.317035 | orchestrator | + echo 2026-04-09 07:50:04.317046 | orchestrator | + echo '# Neutron' 2026-04-09 07:50:04.317059 | orchestrator | + echo 2026-04-09 07:50:04.317070 | orchestrator | + openstack network agent list 2026-04-09 07:50:07.083613 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-09 07:50:07.083703 | 
orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-09 07:50:07.083710 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-09 07:50:07.083726 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-09 07:50:07.083731 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-09 07:50:07.083736 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-09 07:50:07.083743 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-09 07:50:07.083749 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-09 07:50:07.083755 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-09 07:50:07.083761 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-09 07:50:07.083767 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-09 07:50:07.083772 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-09 07:50:07.083777 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-09 07:50:07.353151 | orchestrator | + openstack network service provider list 2026-04-09 07:50:09.673912 | orchestrator | +---------------+------+---------+ 2026-04-09 
07:50:09.674004 | orchestrator | | Service Type | Name | Default | 2026-04-09 07:50:09.674074 | orchestrator | +---------------+------+---------+ 2026-04-09 07:50:09.674087 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-09 07:50:09.674098 | orchestrator | +---------------+------+---------+ 2026-04-09 07:50:09.838590 | orchestrator | 2026-04-09 07:50:09.838667 | orchestrator | # Nova 2026-04-09 07:50:09.838681 | orchestrator | 2026-04-09 07:50:09.838692 | orchestrator | + echo 2026-04-09 07:50:09.838702 | orchestrator | + echo '# Nova' 2026-04-09 07:50:09.838712 | orchestrator | + echo 2026-04-09 07:50:09.838723 | orchestrator | + openstack compute service list 2026-04-09 07:50:12.558670 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-09 07:50:12.558754 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-09 07:50:12.558764 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-09 07:50:12.558772 | orchestrator | | 31c51113-b50e-4615-80c2-a2afe4b5f2dc | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-09T07:50:04.000000 | 2026-04-09 07:50:12.558779 | orchestrator | | df662cc0-a598-4354-a0bb-ddc353663aa6 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-09T07:50:04.000000 | 2026-04-09 07:50:12.558787 | orchestrator | | 9b18bfeb-f5a8-4e14-b1f1-9399e2034902 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-09T07:50:03.000000 | 2026-04-09 07:50:12.558795 | orchestrator | | 0979890e-5870-420f-b954-98e42363db9c | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-09T07:50:11.000000 | 2026-04-09 07:50:12.558827 | orchestrator | | e0cbf10c-6786-48b0-8f70-5b76cf6fa7d5 | nova-conductor | testbed-node-2 | internal | enabled | up | 
2026-04-09T07:50:02.000000 | 2026-04-09 07:50:12.558836 | orchestrator | | 32743f2d-6ac0-4639-a913-b612dcdeaf80 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-09T07:50:11.000000 | 2026-04-09 07:50:12.558844 | orchestrator | | 85262c24-7696-493c-8aae-3dd7b32dade9 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-09T07:50:11.000000 | 2026-04-09 07:50:12.558851 | orchestrator | | 4942290a-f485-4952-bfa7-f0e07c15f3ac | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-09T07:50:03.000000 | 2026-04-09 07:50:12.558858 | orchestrator | | 6b529cbb-0f2b-41ff-a263-2cca7c269cab | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-09T07:50:03.000000 | 2026-04-09 07:50:12.558866 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-09 07:50:12.806964 | orchestrator | + openstack hypervisor list 2026-04-09 07:50:16.037803 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-09 07:50:16.037910 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-09 07:50:16.037925 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-09 07:50:16.037937 | orchestrator | | cbe61515-e3c9-4d95-b064-c84b7292b51e | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-09 07:50:16.037948 | orchestrator | | 998ed68f-9357-49e9-872d-d4b4b5f51e4b | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-09 07:50:16.037978 | orchestrator | | fe125c42-7366-4004-bfac-b91e256bacce | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-09 07:50:16.037989 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-09 07:50:16.302489 | orchestrator | 2026-04-09 07:50:16.302586 | orchestrator | # Run 
OpenStack test play 2026-04-09 07:50:16.302603 | orchestrator | 2026-04-09 07:50:16.302615 | orchestrator | + echo 2026-04-09 07:50:16.302627 | orchestrator | + echo '# Run OpenStack test play' 2026-04-09 07:50:16.302639 | orchestrator | + echo 2026-04-09 07:50:16.302650 | orchestrator | + osism apply --environment openstack test 2026-04-09 07:50:17.546541 | orchestrator | 2026-04-09 07:50:17 | INFO  | Trying to run play test in environment openstack 2026-04-09 07:50:27.626983 | orchestrator | 2026-04-09 07:50:27 | INFO  | Prepare task for execution of test. 2026-04-09 07:50:27.705628 | orchestrator | 2026-04-09 07:50:27 | INFO  | Task 88a7cd0a-4b15-4439-b976-8e17ff69fda8 (test) was prepared for execution. 2026-04-09 07:50:27.705753 | orchestrator | 2026-04-09 07:50:27 | INFO  | It takes a moment until task 88a7cd0a-4b15-4439-b976-8e17ff69fda8 (test) has been started and output is visible here. 2026-04-09 07:53:06.018903 | orchestrator | 2026-04-09 07:53:06.019099 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-09 07:53:06.019123 | orchestrator | 2026-04-09 07:53:06.019136 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-09 07:53:06.019149 | orchestrator | Thursday 09 April 2026 07:50:32 +0000 (0:00:01.299) 0:00:01.299 ******** 2026-04-09 07:53:06.019161 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019240 | orchestrator | 2026-04-09 07:53:06.019257 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-09 07:53:06.019269 | orchestrator | Thursday 09 April 2026 07:50:38 +0000 (0:00:06.193) 0:00:07.493 ******** 2026-04-09 07:53:06.019281 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019292 | orchestrator | 2026-04-09 07:53:06.019304 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-09 07:53:06.019315 | orchestrator | Thursday 09 April 
2026 07:50:43 +0000 (0:00:05.076) 0:00:12.569 ******** 2026-04-09 07:53:06.019327 | orchestrator | changed: [localhost] 2026-04-09 07:53:06.019338 | orchestrator | 2026-04-09 07:53:06.019350 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-09 07:53:06.019394 | orchestrator | Thursday 09 April 2026 07:50:53 +0000 (0:00:09.431) 0:00:22.001 ******** 2026-04-09 07:53:06.019409 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019422 | orchestrator | 2026-04-09 07:53:06.019436 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-09 07:53:06.019450 | orchestrator | Thursday 09 April 2026 07:50:58 +0000 (0:00:05.045) 0:00:27.047 ******** 2026-04-09 07:53:06.019461 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019472 | orchestrator | 2026-04-09 07:53:06.019484 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-09 07:53:06.019495 | orchestrator | Thursday 09 April 2026 07:51:03 +0000 (0:00:05.088) 0:00:32.136 ******** 2026-04-09 07:53:06.019506 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-09 07:53:06.019517 | orchestrator | ok: [localhost] => (item=member) 2026-04-09 07:53:06.019530 | orchestrator | changed: [localhost] => (item=creator) 2026-04-09 07:53:06.019541 | orchestrator | 2026-04-09 07:53:06.019553 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-09 07:53:06.019564 | orchestrator | Thursday 09 April 2026 07:51:16 +0000 (0:00:13.224) 0:00:45.360 ******** 2026-04-09 07:53:06.019575 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019586 | orchestrator | 2026-04-09 07:53:06.019596 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-09 07:53:06.019607 | orchestrator | Thursday 09 April 2026 07:51:21 +0000 (0:00:05.317) 0:00:50.678 ******** 2026-04-09 
07:53:06.019618 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019629 | orchestrator | 2026-04-09 07:53:06.019640 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-09 07:53:06.019651 | orchestrator | Thursday 09 April 2026 07:51:27 +0000 (0:00:05.104) 0:00:55.782 ******** 2026-04-09 07:53:06.019662 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019672 | orchestrator | 2026-04-09 07:53:06.019683 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-09 07:53:06.019694 | orchestrator | Thursday 09 April 2026 07:51:32 +0000 (0:00:05.057) 0:01:00.840 ******** 2026-04-09 07:53:06.019705 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019716 | orchestrator | 2026-04-09 07:53:06.019727 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-04-09 07:53:06.019737 | orchestrator | Thursday 09 April 2026 07:51:37 +0000 (0:00:04.919) 0:01:05.760 ******** 2026-04-09 07:53:06.019749 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019760 | orchestrator | 2026-04-09 07:53:06.019770 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-09 07:53:06.019781 | orchestrator | Thursday 09 April 2026 07:51:41 +0000 (0:00:04.810) 0:01:10.570 ******** 2026-04-09 07:53:06.019792 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.019803 | orchestrator | 2026-04-09 07:53:06.019814 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-09 07:53:06.019824 | orchestrator | Thursday 09 April 2026 07:51:46 +0000 (0:00:04.846) 0:01:15.417 ******** 2026-04-09 07:53:06.019835 | orchestrator | ok: [localhost] => (item={'name': 'test-1'}) 2026-04-09 07:53:06.019847 | orchestrator | ok: [localhost] => (item={'name': 'test-2'}) 2026-04-09 07:53:06.019857 | orchestrator | ok: [localhost] => (item={'name': 'test-3'}) 2026-04-09 
07:53:06.019868 | orchestrator | 2026-04-09 07:53:06.019879 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-09 07:53:06.019891 | orchestrator | Thursday 09 April 2026 07:51:59 +0000 (0:00:12.585) 0:01:28.003 ******** 2026-04-09 07:53:06.019902 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-09 07:53:06.019914 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-09 07:53:06.019944 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-09 07:53:06.019955 | orchestrator | 2026-04-09 07:53:06.019975 | orchestrator | TASK [Create test routers] ***************************************************** 2026-04-09 07:53:06.019986 | orchestrator | Thursday 09 April 2026 07:52:12 +0000 (0:00:12.850) 0:01:40.854 ******** 2026-04-09 07:53:06.019996 | orchestrator | ok: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-09 07:53:06.020009 | orchestrator | ok: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-09 07:53:06.020020 | orchestrator | ok: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-09 07:53:06.020038 | orchestrator | 2026-04-09 07:53:06.020056 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-09 07:53:06.020073 | orchestrator | 2026-04-09 07:53:06.020091 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-09 07:53:06.020111 | orchestrator | Thursday 09 April 2026 07:52:28 +0000 (0:00:16.393) 0:01:57.248 ******** 2026-04-09 07:53:06.020129 | orchestrator | ok: [localhost] 2026-04-09 07:53:06.020149 | orchestrator | 2026-04-09 07:53:06.020222 | orchestrator | TASK [Detach test volume] 
****************************************************** 2026-04-09 07:53:06.020235 | orchestrator | Thursday 09 April 2026 07:52:33 +0000 (0:00:04.847) 0:02:02.095 ******** 2026-04-09 07:53:06.020246 | orchestrator | skipping: [localhost] 2026-04-09 07:53:06.020257 | orchestrator | 2026-04-09 07:53:06.020276 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-09 07:53:06.020294 | orchestrator | Thursday 09 April 2026 07:52:34 +0000 (0:00:01.154) 0:02:03.249 ******** 2026-04-09 07:53:06.020313 | orchestrator | skipping: [localhost] 2026-04-09 07:53:06.020332 | orchestrator | 2026-04-09 07:53:06.020351 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-09 07:53:06.020367 | orchestrator | Thursday 09 April 2026 07:52:35 +0000 (0:00:01.198) 0:02:04.448 ******** 2026-04-09 07:53:06.020377 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-09 07:53:06.020388 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-09 07:53:06.020400 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-09 07:53:06.020419 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-09 07:53:06.020438 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-09 07:53:06.020456 | orchestrator | skipping: [localhost] 2026-04-09 07:53:06.020476 | orchestrator | 2026-04-09 07:53:06.020487 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-09 07:53:06.020498 | orchestrator | Thursday 09 April 2026 07:52:37 +0000 (0:00:01.287) 0:02:05.736 ******** 2026-04-09 07:53:06.020509 | orchestrator | skipping: [localhost] 2026-04-09 07:53:06.020520 | orchestrator | 2026-04-09 07:53:06.020531 | orchestrator | TASK [Create test instances] 
*************************************************** 2026-04-09 07:53:06.020541 | orchestrator | Thursday 09 April 2026 07:52:38 +0000 (0:00:01.215) 0:02:06.951 ******** 2026-04-09 07:53:06.020552 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 07:53:06.020563 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 07:53:06.020574 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 07:53:06.020585 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 07:53:06.020596 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 07:53:06.020607 | orchestrator | 2026-04-09 07:53:06.020617 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-09 07:53:06.020628 | orchestrator | Thursday 09 April 2026 07:52:44 +0000 (0:00:05.879) 0:02:12.830 ******** 2026-04-09 07:53:06.020639 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 
2026-04-09 07:53:06.020653 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j151145419228.4206', 'results_file': '/ansible/.ansible_async/j151145419228.4206', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:53:06.020678 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j593099108112.4231', 'results_file': '/ansible/.ansible_async/j593099108112.4231', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:53:06.020690 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j820000912784.4256', 'results_file': '/ansible/.ansible_async/j820000912784.4256', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:53:06.020701 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j348043590206.4281', 'results_file': '/ansible/.ansible_async/j348043590206.4281', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:53:06.020712 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j388613392290.4306', 'results_file': '/ansible/.ansible_async/j388613392290.4306', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:53:06.020723 | orchestrator | 2026-04-09 07:53:06.020735 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-04-09 07:53:06.020746 | orchestrator | Thursday 09 April 2026 07:53:00 +0000 (0:00:15.942) 0:02:28.773 ******** 2026-04-09 07:53:06.020757 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 07:53:06.020768 | 
orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 07:53:06.020779 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 07:53:06.020790 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 07:53:06.020801 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 07:53:06.020811 | orchestrator | 2026-04-09 07:53:06.020822 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-09 07:53:06.020841 | orchestrator | Thursday 09 April 2026 07:53:06 +0000 (0:00:05.964) 0:02:34.738 ******** 2026-04-09 07:54:07.331528 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j123274799703.4377', 'results_file': '/ansible/.ansible_async/j123274799703.4377', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.331671 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j602289415958.4402', 'results_file': '/ansible/.ansible_async/j602289415958.4402', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.331697 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j817416243351.4427', 'results_file': '/ansible/.ansible_async/j817416243351.4427', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.331717 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j879989592427.4452', 'results_file': '/ansible/.ansible_async/j879989592427.4452', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.331736 | 
orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j276869141019.4477', 'results_file': '/ansible/.ansible_async/j276869141019.4477', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.331753 | orchestrator | 2026-04-09 07:54:07.331804 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-09 07:54:07.331825 | orchestrator | Thursday 09 April 2026 07:53:11 +0000 (0:00:05.248) 0:02:39.986 ******** 2026-04-09 07:54:07.331842 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 07:54:07.331857 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 07:54:07.331873 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 07:54:07.331890 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 07:54:07.331907 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 07:54:07.331923 | orchestrator | 2026-04-09 07:54:07.331941 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-09 07:54:07.331978 | orchestrator | Thursday 09 April 2026 07:53:17 +0000 (0:00:05.883) 0:02:45.870 ******** 2026-04-09 07:54:07.331996 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-04-09 07:54:07.332015 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j889874291591.4548', 'results_file': '/ansible/.ansible_async/j889874291591.4548', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.332034 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j329300126584.4573', 'results_file': '/ansible/.ansible_async/j329300126584.4573', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.332053 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j700733091086.4599', 'results_file': '/ansible/.ansible_async/j700733091086.4599', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.332078 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j211295763540.4625', 'results_file': '/ansible/.ansible_async/j211295763540.4625', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.332099 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j984824117502.4651', 'results_file': '/ansible/.ansible_async/j984824117502.4651', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-09 07:54:07.332117 | orchestrator | 2026-04-09 07:54:07.332133 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-09 07:54:07.332197 | orchestrator | Thursday 09 April 2026 07:53:28 +0000 (0:00:11.456) 0:02:57.326 ******** 2026-04-09 07:54:07.332216 | orchestrator | ok: [localhost] 2026-04-09 07:54:07.332236 | orchestrator | 2026-04-09 07:54:07.332254 | orchestrator 
| TASK [Attach test volume] ****************************************************** 2026-04-09 07:54:07.332270 | orchestrator | Thursday 09 April 2026 07:53:33 +0000 (0:00:04.987) 0:03:02.314 ******** 2026-04-09 07:54:07.332288 | orchestrator | ok: [localhost] 2026-04-09 07:54:07.332305 | orchestrator | 2026-04-09 07:54:07.332322 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-09 07:54:07.332360 | orchestrator | Thursday 09 April 2026 07:53:39 +0000 (0:00:05.898) 0:03:08.213 ******** 2026-04-09 07:54:07.332380 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 07:54:07.332398 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 07:54:07.332415 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 07:54:07.332432 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 07:54:07.332448 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 07:54:07.332480 | orchestrator | 2026-04-09 07:54:07.332498 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-09 07:54:07.332516 | orchestrator | Thursday 09 April 2026 07:54:05 +0000 (0:00:25.923) 0:03:34.137 ******** 2026-04-09 07:54:07.332533 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-09 07:54:07.332548 | orchestrator |  "msg": "test: 192.168.112.181" 2026-04-09 07:54:07.332561 | orchestrator | } 2026-04-09 07:54:07.332576 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-09 07:54:07.332591 | orchestrator |  "msg": "test-1: 192.168.112.116" 2026-04-09 07:54:07.332605 | orchestrator | } 2026-04-09 07:54:07.332638 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-09 07:54:07.332663 | orchestrator |  "msg": "test-2: 192.168.112.137" 2026-04-09 07:54:07.332677 | orchestrator | } 
2026-04-09 07:54:07.332692 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-09 07:54:07.332707 | orchestrator |  "msg": "test-3: 192.168.112.125" 2026-04-09 07:54:07.332721 | orchestrator | } 2026-04-09 07:54:07.332734 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-09 07:54:07.332747 | orchestrator |  "msg": "test-4: 192.168.112.154" 2026-04-09 07:54:07.332761 | orchestrator | } 2026-04-09 07:54:07.332775 | orchestrator | 2026-04-09 07:54:07.332817 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 07:54:07.332833 | orchestrator | localhost : ok=26  changed=8  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 07:54:07.332848 | orchestrator | 2026-04-09 07:54:07.332862 | orchestrator | 2026-04-09 07:54:07.332876 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 07:54:07.332891 | orchestrator | Thursday 09 April 2026 07:54:07 +0000 (0:00:01.662) 0:03:35.800 ******** 2026-04-09 07:54:07.332906 | orchestrator | =============================================================================== 2026-04-09 07:54:07.332920 | orchestrator | Create floating ip addresses ------------------------------------------- 25.92s 2026-04-09 07:54:07.332933 | orchestrator | Create test routers ---------------------------------------------------- 16.39s 2026-04-09 07:54:07.332946 | orchestrator | Wait for instance creation to complete --------------------------------- 15.94s 2026-04-09 07:54:07.332959 | orchestrator | Add member roles to user test ------------------------------------------ 13.22s 2026-04-09 07:54:07.332974 | orchestrator | Create test subnets ---------------------------------------------------- 12.85s 2026-04-09 07:54:07.332989 | orchestrator | Create test networks --------------------------------------------------- 12.59s 2026-04-09 07:54:07.333003 | orchestrator | Wait for tags to be added 
---------------------------------------------- 11.46s 2026-04-09 07:54:07.333018 | orchestrator | Add manager role to user test-admin ------------------------------------- 9.43s 2026-04-09 07:54:07.333031 | orchestrator | Create test domain ------------------------------------------------------ 6.19s 2026-04-09 07:54:07.333044 | orchestrator | Add metadata to instances ----------------------------------------------- 5.96s 2026-04-09 07:54:07.333059 | orchestrator | Attach test volume ------------------------------------------------------ 5.90s 2026-04-09 07:54:07.333073 | orchestrator | Add tag to instances ---------------------------------------------------- 5.88s 2026-04-09 07:54:07.333086 | orchestrator | Create test instances --------------------------------------------------- 5.88s 2026-04-09 07:54:07.333099 | orchestrator | Create test server group ------------------------------------------------ 5.32s 2026-04-09 07:54:07.333111 | orchestrator | Wait for metadata to be added ------------------------------------------- 5.25s 2026-04-09 07:54:07.333124 | orchestrator | Create ssh security group ----------------------------------------------- 5.10s 2026-04-09 07:54:07.333138 | orchestrator | Create test user -------------------------------------------------------- 5.09s 2026-04-09 07:54:07.333171 | orchestrator | Create test-admin user -------------------------------------------------- 5.08s 2026-04-09 07:54:07.333186 | orchestrator | Add rule to ssh security group ------------------------------------------ 5.06s 2026-04-09 07:54:07.333199 | orchestrator | Create test project ----------------------------------------------------- 5.05s 2026-04-09 07:54:07.504393 | orchestrator | + server_list 2026-04-09 07:54:07.504507 | orchestrator | + openstack --os-cloud test server list 2026-04-09 07:54:11.161525 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 
2026-04-09 07:54:11.161628 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-04-09 07:54:11.161645 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-09 07:54:11.161656 | orchestrator | | ba1519ae-7195-464e-8d49-84a6a2a905b7 | test-4 | ACTIVE | test-3=192.168.112.154, 192.168.202.116 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 07:54:11.161667 | orchestrator | | a4932245-1532-4210-8014-5f31e894606e | test-3 | ACTIVE | test-2=192.168.112.125, 192.168.201.151 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 07:54:11.161679 | orchestrator | | 0c6b4412-9812-4152-9c50-5ceb4b4c1853 | test-1 | ACTIVE | test-1=192.168.112.116, 192.168.200.93 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 07:54:11.161690 | orchestrator | | 9e5b4591-beda-4eff-9b2d-a5e9a114870a | test-2 | ACTIVE | test-2=192.168.112.137, 192.168.201.214 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 07:54:11.161701 | orchestrator | | 61747fe2-2e4c-4212-9d39-9cc958a5fad0 | test | ACTIVE | test-1=192.168.112.181, 192.168.200.66 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 07:54:11.161712 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-09 07:54:11.425423 | orchestrator | + openstack --os-cloud test server show test 2026-04-09 07:54:14.734943 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:14.735072 | orchestrator | | Field | Value | 2026-04-09 
07:54:14.735091 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:14.735104 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 07:54:14.735116 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 07:54:14.735128 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 07:54:14.735208 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-09 07:54:14.735223 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 07:54:14.735235 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 07:54:14.735266 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 07:54:14.735278 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 07:54:14.735290 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 07:54:14.735301 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 07:54:14.735312 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 07:54:14.735323 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 07:54:14.735344 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 07:54:14.735360 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 07:54:14.735372 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 07:54:14.735383 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:31.000000 | 2026-04-09 07:54:14.735402 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 07:54:14.735414 | orchestrator | | accessIPv4 | | 2026-04-09 07:54:14.735425 | orchestrator | | accessIPv6 | | 2026-04-09 
07:54:14.735436 | orchestrator | | addresses | test-1=192.168.112.181, 192.168.200.66 | 2026-04-09 07:54:14.735447 | orchestrator | | config_drive | | 2026-04-09 07:54:14.735465 | orchestrator | | created | 2026-04-09T04:16:04Z | 2026-04-09 07:54:14.735476 | orchestrator | | description | None | 2026-04-09 07:54:14.735492 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 07:54:14.735504 | orchestrator | | hostId | e6a4f3ec81f73649a6d605835ca17df687b1765525d632ba291ccfd0 | 2026-04-09 07:54:14.735515 | orchestrator | | host_status | None | 2026-04-09 07:54:14.735533 | orchestrator | | id | 61747fe2-2e4c-4212-9d39-9cc958a5fad0 | 2026-04-09 07:54:14.735545 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 07:54:14.735556 | orchestrator | | key_name | test | 2026-04-09 07:54:14.735567 | orchestrator | | locked | False | 2026-04-09 07:54:14.735579 | orchestrator | | locked_reason | None | 2026-04-09 07:54:14.735596 | orchestrator | | name | test | 2026-04-09 07:54:14.735608 | orchestrator | | pinned_availability_zone | None | 2026-04-09 07:54:14.735623 | orchestrator | | progress | 0 | 2026-04-09 07:54:14.735635 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 | 2026-04-09 07:54:14.735646 | orchestrator | | properties | hostname='test' | 2026-04-09 07:54:14.735664 | orchestrator | | security_groups | name='ssh' | 2026-04-09 07:54:14.735676 | orchestrator | | | name='icmp' | 2026-04-09 07:54:14.735687 | orchestrator | | server_groups | None | 2026-04-09 07:54:14.735698 | orchestrator | | status | ACTIVE | 2026-04-09 07:54:14.735715 | orchestrator | | tags | test | 2026-04-09 
07:54:14.735727 | orchestrator | | trusted_image_certificates | None | 2026-04-09 07:54:14.735738 | orchestrator | | updated | 2026-04-09T07:53:06Z | 2026-04-09 07:54:14.735749 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea | 2026-04-09 07:54:14.735761 | orchestrator | | volumes_attached | delete_on_termination='True', id='d03324af-d95d-45d2-b3ed-3b75cdb94dcf' | 2026-04-09 07:54:14.735772 | orchestrator | | | delete_on_termination='False', id='e352b469-8578-444e-a62d-fced9b687e85' | 2026-04-09 07:54:14.739717 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:15.021125 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-09 07:54:17.916653 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:17.916875 | orchestrator | | Field | Value | 2026-04-09 07:54:17.916912 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:17.916931 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 07:54:17.916943 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 07:54:17.916954 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 07:54:17.916965 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-09 07:54:17.916976 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 07:54:17.916988 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 07:54:17.917020 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 07:54:17.917033 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 07:54:17.917051 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 07:54:17.917062 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 07:54:17.917078 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 07:54:17.917090 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 07:54:17.917101 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 07:54:17.917112 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 07:54:17.917123 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 07:54:17.917134 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:31.000000 | 2026-04-09 07:54:17.917183 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 07:54:17.917203 | orchestrator | | accessIPv4 | | 2026-04-09 07:54:17.917214 | orchestrator | | accessIPv6 | | 2026-04-09 07:54:17.917225 | orchestrator | | 
addresses | test-1=192.168.112.116, 192.168.200.93 | 2026-04-09 07:54:17.917241 | orchestrator | | config_drive | | 2026-04-09 07:54:17.917253 | orchestrator | | created | 2026-04-09T04:16:06Z | 2026-04-09 07:54:17.917264 | orchestrator | | description | None | 2026-04-09 07:54:17.917275 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 07:54:17.917286 | orchestrator | | hostId | e6a4f3ec81f73649a6d605835ca17df687b1765525d632ba291ccfd0 | 2026-04-09 07:54:17.917297 | orchestrator | | host_status | None | 2026-04-09 07:54:17.917316 | orchestrator | | id | 0c6b4412-9812-4152-9c50-5ceb4b4c1853 | 2026-04-09 07:54:17.917335 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 07:54:17.917346 | orchestrator | | key_name | test | 2026-04-09 07:54:17.917357 | orchestrator | | locked | False | 2026-04-09 07:54:17.917373 | orchestrator | | locked_reason | None | 2026-04-09 07:54:17.917384 | orchestrator | | name | test-1 | 2026-04-09 07:54:17.917396 | orchestrator | | pinned_availability_zone | None | 2026-04-09 07:54:17.917409 | orchestrator | | progress | 0 | 2026-04-09 07:54:17.917422 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 | 2026-04-09 07:54:17.917435 | orchestrator | | properties | hostname='test-1' | 2026-04-09 07:54:17.917462 | orchestrator | | security_groups | name='ssh' | 2026-04-09 07:54:17.917475 | orchestrator | | | name='icmp' | 2026-04-09 07:54:17.917488 | orchestrator | | server_groups | None | 2026-04-09 07:54:17.917507 | orchestrator | | status | ACTIVE | 2026-04-09 07:54:17.917520 | orchestrator | | tags | test | 2026-04-09 07:54:17.917533 | orchestrator | | 
trusted_image_certificates | None | 2026-04-09 07:54:17.917546 | orchestrator | | updated | 2026-04-09T07:53:06Z | 2026-04-09 07:54:17.917558 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea | 2026-04-09 07:54:17.917571 | orchestrator | | volumes_attached | delete_on_termination='True', id='3effe481-0984-41be-85c1-c77244e7c318' | 2026-04-09 07:54:17.917590 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:18.178097 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-09 07:54:21.041306 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:21.041401 | orchestrator | | Field | Value | 2026-04-09 07:54:21.041416 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:21.041437 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 07:54:21.041442 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-04-09 07:54:21.041446 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 07:54:21.041450 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-09 07:54:21.041454 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 07:54:21.041472 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 07:54:21.041487 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 07:54:21.041491 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 07:54:21.041495 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 07:54:21.041499 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 07:54:21.041505 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 07:54:21.041509 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 07:54:21.041513 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 07:54:21.041517 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 07:54:21.041525 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 07:54:21.041529 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:32.000000 | 2026-04-09 07:54:21.041535 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 07:54:21.041540 | orchestrator | | accessIPv4 | | 2026-04-09 07:54:21.041544 | orchestrator | | accessIPv6 | | 2026-04-09 07:54:21.041547 | orchestrator | | addresses | test-2=192.168.112.137, 192.168.201.214 | 2026-04-09 07:54:21.041551 | orchestrator | | config_drive | | 2026-04-09 07:54:21.041556 | orchestrator | | created | 2026-04-09T04:16:06Z | 2026-04-09 07:54:21.041560 | orchestrator | | description | None | 2026-04-09 07:54:21.041563 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 07:54:21.041611 | orchestrator | | hostId | f57f87be6086d84d015b9887a3e67d1e2733d74d53d33b9070d2b1e6 | 2026-04-09 07:54:21.041618 | orchestrator | | host_status | None | 2026-04-09 07:54:21.041626 | orchestrator | | id | 9e5b4591-beda-4eff-9b2d-a5e9a114870a | 2026-04-09 07:54:21.041631 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 07:54:21.041635 | orchestrator | | key_name | test | 2026-04-09 07:54:21.041639 | orchestrator | | locked | False | 2026-04-09 07:54:21.041645 | orchestrator | | locked_reason | None | 2026-04-09 07:54:21.041649 | orchestrator | | name | test-2 | 2026-04-09 07:54:21.041653 | orchestrator | | pinned_availability_zone | None | 2026-04-09 07:54:21.041661 | orchestrator | | progress | 0 | 2026-04-09 07:54:21.041665 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 | 2026-04-09 07:54:21.041669 | orchestrator | | properties | hostname='test-2' | 2026-04-09 07:54:21.041676 | orchestrator | | security_groups | name='ssh' | 2026-04-09 07:54:21.041680 | orchestrator | | | name='icmp' | 2026-04-09 07:54:21.041684 | orchestrator | | server_groups | None | 2026-04-09 07:54:21.041688 | orchestrator | | status | ACTIVE | 2026-04-09 07:54:21.041695 | orchestrator | | tags | test | 2026-04-09 07:54:21.041699 | orchestrator | | trusted_image_certificates | None | 2026-04-09 07:54:21.041706 | orchestrator | | updated | 2026-04-09T07:53:07Z | 2026-04-09 07:54:21.041710 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea | 2026-04-09 07:54:21.041714 | orchestrator | | volumes_attached | delete_on_termination='True', id='ee4965c4-28f4-460c-80fe-7433de8c29bd' | 2026-04-09 07:54:21.044877 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:21.309122 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-09 07:54:24.309251 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:24.309365 | orchestrator | | Field | Value | 2026-04-09 07:54:24.309381 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 07:54:24.309393 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 07:54:24.309422 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 07:54:24.309454 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 07:54:24.309465 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-09 07:54:24.309475 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 07:54:24.309485 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 
07:54:24.309539 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-09 07:54:24.309553 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-09 07:54:24.309563 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-09 07:54:24.309574 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-09 07:54:24.309589 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-09 07:54:24.309607 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-09 07:54:24.309617 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-09 07:54:24.309627 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-09 07:54:24.309637 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-09 07:54:24.309647 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:32.000000 |
2026-04-09 07:54:24.309665 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-09 07:54:24.309676 | orchestrator | | accessIPv4 | |
2026-04-09 07:54:24.309686 | orchestrator | | accessIPv6 | |
2026-04-09 07:54:24.309696 | orchestrator | | addresses | test-2=192.168.112.125, 192.168.201.151 |
2026-04-09 07:54:24.309711 | orchestrator | | config_drive | |
2026-04-09 07:54:24.309729 | orchestrator | | created | 2026-04-09T04:16:08Z |
2026-04-09 07:54:24.309739 | orchestrator | | description | None |
2026-04-09 07:54:24.309752 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-09 07:54:24.309764 | orchestrator | | hostId | f57f87be6086d84d015b9887a3e67d1e2733d74d53d33b9070d2b1e6 |
2026-04-09 07:54:24.309776 | orchestrator | | host_status | None |
2026-04-09 07:54:24.309794 | orchestrator | | id | a4932245-1532-4210-8014-5f31e894606e |
2026-04-09 07:54:24.309806 | orchestrator | | image | N/A (booted from volume) |
2026-04-09 07:54:24.309818 | orchestrator | | key_name | test |
2026-04-09 07:54:24.309830 | orchestrator | | locked | False |
2026-04-09 07:54:24.309853 | orchestrator | | locked_reason | None |
2026-04-09 07:54:24.309864 | orchestrator | | name | test-3 |
2026-04-09 07:54:24.309876 | orchestrator | | pinned_availability_zone | None |
2026-04-09 07:54:24.309888 | orchestrator | | progress | 0 |
2026-04-09 07:54:24.309900 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 |
2026-04-09 07:54:24.309912 | orchestrator | | properties | hostname='test-3' |
2026-04-09 07:54:24.309931 | orchestrator | | security_groups | name='ssh' |
2026-04-09 07:54:24.309943 | orchestrator | | | name='icmp' |
2026-04-09 07:54:24.309955 | orchestrator | | server_groups | None |
2026-04-09 07:54:24.309973 | orchestrator | | status | ACTIVE |
2026-04-09 07:54:24.309989 | orchestrator | | tags | test |
2026-04-09 07:54:24.310001 | orchestrator | | trusted_image_certificates | None |
2026-04-09 07:54:24.310067 | orchestrator | | updated | 2026-04-09T07:53:08Z |
2026-04-09 07:54:24.310083 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea |
2026-04-09 07:54:24.310095 | orchestrator | | volumes_attached | delete_on_termination='True', id='837050b0-ef15-43e5-9d37-e3f9710f264f' |
2026-04-09 07:54:24.314092 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-09 07:54:24.585317 | orchestrator | + openstack --os-cloud test server show test-4
2026-04-09 07:54:27.539600 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-09 07:54:27.539686 | orchestrator | | Field | Value |
2026-04-09 07:54:27.539714 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-09 07:54:27.539723 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-09 07:54:27.539742 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-09 07:54:27.539750 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-09 07:54:27.539757 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-04-09 07:54:27.539764 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-09 07:54:27.539770 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-09 07:54:27.539791 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-09 07:54:27.539798 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-09 07:54:27.539805 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-09 07:54:27.539817 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-09 07:54:27.539824 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-09 07:54:27.539883 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-09 07:54:27.539895 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-09 07:54:27.539902 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-09 07:54:27.539909 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-09 07:54:27.539916 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T04:16:31.000000 |
2026-04-09 07:54:27.539928 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-09 07:54:27.539936 | orchestrator | | accessIPv4 | |
2026-04-09 07:54:27.539948 | orchestrator | | accessIPv6 | |
2026-04-09 07:54:27.539955 | orchestrator | | addresses | test-3=192.168.112.154, 192.168.202.116 |
2026-04-09 07:54:27.539962 | orchestrator | | config_drive | |
2026-04-09 07:54:27.539972 | orchestrator | | created | 2026-04-09T04:16:09Z |
2026-04-09 07:54:27.539979 | orchestrator | | description | None |
2026-04-09 07:54:27.539986 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-09 07:54:27.539993 | orchestrator | | hostId | e6a4f3ec81f73649a6d605835ca17df687b1765525d632ba291ccfd0 |
2026-04-09 07:54:27.540000 | orchestrator | | host_status | None |
2026-04-09 07:54:27.540012 | orchestrator | | id | ba1519ae-7195-464e-8d49-84a6a2a905b7 |
2026-04-09 07:54:27.540024 | orchestrator | | image | N/A (booted from volume) |
2026-04-09 07:54:27.540031 | orchestrator | | key_name | test |
2026-04-09 07:54:27.540038 | orchestrator | | locked | False |
2026-04-09 07:54:27.540045 | orchestrator | | locked_reason | None |
2026-04-09 07:54:27.540055 | orchestrator | | name | test-4 |
2026-04-09 07:54:27.540062 | orchestrator | | pinned_availability_zone | None |
2026-04-09 07:54:27.540069 | orchestrator | | progress | 0 |
2026-04-09 07:54:27.540076 | orchestrator | | project_id | ca7e15a8f29a424baf84073e4157a711 |
2026-04-09 07:54:27.540083 | orchestrator | | properties | hostname='test-4' |
2026-04-09 07:54:27.540099 | orchestrator | | security_groups | name='ssh' |
2026-04-09 07:54:27.540106 | orchestrator | | | name='icmp' |
2026-04-09 07:54:27.540113 | orchestrator | | server_groups | None |
2026-04-09 07:54:27.540119 | orchestrator | | status | ACTIVE |
2026-04-09 07:54:27.540130 | orchestrator | | tags | test |
2026-04-09 07:54:27.540137 | orchestrator | | trusted_image_certificates | None |
2026-04-09 07:54:27.540170 | orchestrator | | updated | 2026-04-09T07:53:09Z |
2026-04-09 07:54:27.540177 | orchestrator | | user_id | 6835449df2ee44bd8d6d112412f3f2ea |
2026-04-09 07:54:27.540184 | orchestrator | | volumes_attached | delete_on_termination='True', id='62db43ed-06ba-4861-9d62-b2ace55c75c3' |
2026-04-09 07:54:27.544034 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-09 07:54:27.793240 | orchestrator | + server_ping
2026-04-09 07:54:27.793812 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-09 07:54:27.795378 | orchestrator | ++ tr -d '\r'
2026-04-09 07:54:30.660004 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 07:54:30.660186 | orchestrator | + ping -c3 192.168.112.154
2026-04-09 07:54:30.673648 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data.
2026-04-09 07:54:30.673742 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=7.87 ms
2026-04-09 07:54:31.669483 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=2.41 ms
2026-04-09 07:54:32.671220 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.99 ms
2026-04-09 07:54:32.671331 | orchestrator |
2026-04-09 07:54:32.671350 | orchestrator | --- 192.168.112.154 ping statistics ---
2026-04-09 07:54:32.671364 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-09 07:54:32.671375 | orchestrator | rtt min/avg/max/mdev = 1.989/4.087/7.868/2.678 ms
2026-04-09 07:54:32.671764 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 07:54:32.671799 | orchestrator | + ping -c3 192.168.112.116
2026-04-09 07:54:32.683161 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-04-09 07:54:32.683236 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.64 ms
2026-04-09 07:54:33.680168 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.97 ms
2026-04-09 07:54:34.681527 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.04 ms
2026-04-09 07:54:34.681632 | orchestrator |
2026-04-09 07:54:34.681648 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-04-09 07:54:34.681661 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-09 07:54:34.681673 | orchestrator | rtt min/avg/max/mdev = 1.970/3.548/6.638/2.185 ms
2026-04-09 07:54:34.682711 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 07:54:34.682741 | orchestrator | + ping -c3 192.168.112.137
2026-04-09 07:54:34.695195 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data.
2026-04-09 07:54:34.695278 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=8.32 ms
2026-04-09 07:54:35.691595 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.48 ms
2026-04-09 07:54:36.692404 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.70 ms
2026-04-09 07:54:36.692516 | orchestrator |
2026-04-09 07:54:36.692532 | orchestrator | --- 192.168.112.137 ping statistics ---
2026-04-09 07:54:36.692543 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-09 07:54:36.692553 | orchestrator | rtt min/avg/max/mdev = 1.698/4.165/8.323/2.956 ms
2026-04-09 07:54:36.693586 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 07:54:36.693620 | orchestrator | + ping -c3 192.168.112.125
2026-04-09 07:54:36.705234 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data.
2026-04-09 07:54:36.705316 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=7.43 ms
2026-04-09 07:54:37.701983 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.34 ms
2026-04-09 07:54:38.703476 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=2.22 ms
2026-04-09 07:54:38.703559 | orchestrator |
2026-04-09 07:54:38.703568 | orchestrator | --- 192.168.112.125 ping statistics ---
2026-04-09 07:54:38.703577 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-09 07:54:38.703584 | orchestrator | rtt min/avg/max/mdev = 2.218/3.993/7.426/2.427 ms
2026-04-09 07:54:38.704201 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 07:54:38.704259 | orchestrator | + ping -c3 192.168.112.181
2026-04-09 07:54:38.713866 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2026-04-09 07:54:38.713897 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=5.32 ms
2026-04-09 07:54:39.713423 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.69 ms
2026-04-09 07:54:40.715214 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.86 ms
2026-04-09 07:54:40.715302 | orchestrator |
2026-04-09 07:54:40.715319 | orchestrator | --- 192.168.112.181 ping statistics ---
2026-04-09 07:54:40.715332 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-09 07:54:40.715342 | orchestrator | rtt min/avg/max/mdev = 1.862/3.293/5.323/1.475 ms
2026-04-09 07:54:40.715352 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-09 07:54:40.838738 | orchestrator | ok: Runtime: 0:09:15.003866
2026-04-09 07:54:40.877786 |
2026-04-09 07:54:40.878018 | PLAY RECAP
2026-04-09 07:54:40.878163 | orchestrator | ok: 32 changed: 13 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-09 07:54:40.878233 |
2026-04-09 07:54:41.176190 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-04-09 07:54:41.180469 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-09 07:54:42.006472 |
2026-04-09 07:54:42.006677 | PLAY [Post output play]
2026-04-09 07:54:42.024784 |
2026-04-09 07:54:42.024946 | LOOP [stage-output : Register sources]
2026-04-09 07:54:42.096659 |
2026-04-09 07:54:42.097005 | TASK [stage-output : Check sudo]
2026-04-09 07:54:42.928972 | orchestrator | sudo: a password is required
2026-04-09 07:54:43.137138 | orchestrator | ok: Runtime: 0:00:00.016885
2026-04-09 07:54:43.153466 |
2026-04-09 07:54:43.153656 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-09 07:54:43.190233 |
2026-04-09 07:54:43.190495 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-09 07:54:43.258192 | orchestrator | ok
2026-04-09 07:54:43.267343 |
2026-04-09 07:54:43.267549 | LOOP [stage-output : Ensure target folders exist]
2026-04-09 07:54:43.745942 | orchestrator | ok: "docs"
2026-04-09 07:54:43.746300 |
2026-04-09 07:54:43.990206 | orchestrator | ok: "artifacts"
2026-04-09 07:54:44.240627 | orchestrator | ok: "logs"
2026-04-09 07:54:44.257383 |
2026-04-09 07:54:44.257568 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-09 07:54:44.291170 |
2026-04-09 07:54:44.291391 | TASK [stage-output : Make all log files readable]
2026-04-09 07:54:44.582611 | orchestrator | ok
2026-04-09 07:54:44.592347 |
2026-04-09 07:54:44.592486 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-09 07:54:44.627692 | orchestrator | skipping: Conditional result was False
2026-04-09 07:54:44.643879 |
2026-04-09 07:54:44.644056 | TASK [stage-output : Discover log files for compression]
2026-04-09 07:54:44.669390 | orchestrator | skipping: Conditional result was False
2026-04-09 07:54:44.679343 |
2026-04-09 07:54:44.679486 | LOOP [stage-output : Archive everything from logs]
2026-04-09 07:54:44.726124 |
2026-04-09 07:54:44.726316 | PLAY [Post cleanup play]
2026-04-09 07:54:44.735809 |
2026-04-09 07:54:44.735917 | TASK [Set cloud fact (Zuul deployment)]
2026-04-09 07:54:44.798769 | orchestrator | ok
2026-04-09 07:54:44.811180 |
2026-04-09 07:54:44.811317 | TASK [Set cloud fact (local deployment)]
2026-04-09 07:54:44.846112 | orchestrator | skipping: Conditional result was False
2026-04-09 07:54:44.864086 |
2026-04-09 07:54:44.864254 | TASK [Clean the cloud environment]
2026-04-09 07:54:45.581354 | orchestrator | 2026-04-09 07:54:45 - clean up servers
2026-04-09 07:54:46.840199 | orchestrator | 2026-04-09 07:54:46 - testbed-manager
2026-04-09 07:54:46.922527 | orchestrator | 2026-04-09 07:54:46 - testbed-node-3
2026-04-09 07:54:47.012193 | orchestrator | 2026-04-09 07:54:47 - testbed-node-2
2026-04-09 07:54:47.105129 | orchestrator | 2026-04-09 07:54:47 - testbed-node-5
2026-04-09 07:54:47.203207 | orchestrator | 2026-04-09 07:54:47 - testbed-node-1
2026-04-09 07:54:47.294540 | orchestrator | 2026-04-09 07:54:47 - testbed-node-4
2026-04-09 07:54:47.394520 | orchestrator | 2026-04-09 07:54:47 - testbed-node-0
2026-04-09 07:54:47.495217 | orchestrator | 2026-04-09 07:54:47 - clean up keypairs
2026-04-09 07:54:47.518147 | orchestrator | 2026-04-09 07:54:47 - testbed
2026-04-09 07:54:47.545091 | orchestrator | 2026-04-09 07:54:47 - wait for servers to be gone
2026-04-09 07:54:58.539383 | orchestrator | 2026-04-09 07:54:58 - clean up ports
2026-04-09 07:54:58.734087 | orchestrator | 2026-04-09 07:54:58 - 7e736ce5-966e-4891-a8dc-b63cc80967be
2026-04-09 07:54:58.997828 | orchestrator | 2026-04-09 07:54:58 - 917da153-c1e2-4268-b5d4-3a290381f07e
2026-04-09 07:54:59.269160 | orchestrator | 2026-04-09 07:54:59 - 94792bc2-1f7a-4681-be6c-e9d7c2420ae9
2026-04-09 07:54:59.494118 | orchestrator | 2026-04-09 07:54:59 - 9d5b958c-7789-4411-8027-75190f4cedce
2026-04-09 07:54:59.709381 | orchestrator | 2026-04-09 07:54:59 - b889e493-426f-4809-916c-d0b3470a49e1
2026-04-09 07:55:00.204818 | orchestrator | 2026-04-09 07:55:00 - d7608eef-e608-42c6-bf66-d35751b1ec0b
2026-04-09 07:55:00.419280 | orchestrator | 2026-04-09 07:55:00 - f79db8ad-1c4f-4b3a-8d4a-28f1520fcfb1
2026-04-09 07:55:00.631420 | orchestrator | 2026-04-09 07:55:00 - clean up volumes
2026-04-09 07:55:00.769467 | orchestrator | 2026-04-09 07:55:00 - testbed-volume-2-node-base
2026-04-09 07:55:00.807160 | orchestrator | 2026-04-09 07:55:00 - testbed-volume-manager-base
2026-04-09 07:55:00.848796 | orchestrator | 2026-04-09 07:55:00 - testbed-volume-4-node-base
2026-04-09 07:55:00.888431 | orchestrator | 2026-04-09 07:55:00 - testbed-volume-5-node-base
2026-04-09 07:55:00.944725 | orchestrator | 2026-04-09 07:55:00 - testbed-volume-1-node-base
2026-04-09 07:55:00.996202 | orchestrator | 2026-04-09 07:55:00 - testbed-volume-3-node-base
2026-04-09 07:55:01.042193 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-0-node-base
2026-04-09 07:55:01.086311 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-3-node-3
2026-04-09 07:55:01.135327 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-7-node-4
2026-04-09 07:55:01.184032 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-2-node-5
2026-04-09 07:55:01.226629 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-4-node-4
2026-04-09 07:55:01.275760 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-5-node-5
2026-04-09 07:55:01.321475 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-1-node-4
2026-04-09 07:55:01.364926 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-0-node-3
2026-04-09 07:55:01.408370 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-8-node-5
2026-04-09 07:55:01.450649 | orchestrator | 2026-04-09 07:55:01 - testbed-volume-6-node-3
2026-04-09 07:55:01.497266 | orchestrator | 2026-04-09 07:55:01 - disconnect routers
2026-04-09 07:55:01.615490 | orchestrator | 2026-04-09 07:55:01 - testbed
2026-04-09 07:55:02.700859 | orchestrator | 2026-04-09 07:55:02 - clean up subnets
2026-04-09 07:55:02.763676 | orchestrator | 2026-04-09 07:55:02 - subnet-testbed-management
2026-04-09 07:55:02.931357 | orchestrator | 2026-04-09 07:55:02 - clean up networks
2026-04-09 07:55:03.137510 | orchestrator | 2026-04-09 07:55:03 - net-testbed-management
2026-04-09 07:55:03.428192 | orchestrator | 2026-04-09 07:55:03 - clean up security groups
2026-04-09 07:55:03.473992 | orchestrator | 2026-04-09 07:55:03 - testbed-node
2026-04-09 07:55:03.586549 | orchestrator | 2026-04-09 07:55:03 - testbed-management
2026-04-09 07:55:04.254274 | orchestrator | 2026-04-09 07:55:04 - clean up floating ips
2026-04-09 07:55:04.294261 | orchestrator | 2026-04-09 07:55:04 - 81.163.192.191
2026-04-09 07:55:04.666317 | orchestrator | 2026-04-09 07:55:04 - clean up routers
2026-04-09 07:55:04.774339 | orchestrator | 2026-04-09 07:55:04 - testbed
2026-04-09 07:55:05.927139 | orchestrator | ok: Runtime: 0:00:20.497382
2026-04-09 07:55:05.931686 |
2026-04-09 07:55:05.931829 | PLAY RECAP
2026-04-09 07:55:05.931951 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-09 07:55:05.932006 |
2026-04-09 07:55:06.073021 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-09 07:55:06.077106 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-09 07:55:06.839104 |
2026-04-09 07:55:06.839285 | PLAY [Cleanup play]
2026-04-09 07:55:06.856215 |
2026-04-09 07:55:06.856363 | TASK [Set cloud fact (Zuul deployment)]
2026-04-09 07:55:06.924361 | orchestrator | ok
2026-04-09 07:55:06.933927 |
2026-04-09 07:55:06.934104 | TASK [Set cloud fact (local deployment)]
2026-04-09 07:55:06.968899 | orchestrator | skipping: Conditional result was False
2026-04-09 07:55:06.985377 |
2026-04-09 07:55:06.985551 | TASK [Clean the cloud environment]
2026-04-09 07:55:08.071947 | orchestrator | 2026-04-09 07:55:08 - clean up servers
2026-04-09 07:55:08.607186 | orchestrator | 2026-04-09 07:55:08 - clean up keypairs
2026-04-09 07:55:08.626627 | orchestrator | 2026-04-09 07:55:08 - wait for servers to be gone
2026-04-09 07:55:08.671289 | orchestrator | 2026-04-09 07:55:08 - clean up ports
2026-04-09 07:55:08.752735 | orchestrator | 2026-04-09 07:55:08 - clean up volumes
2026-04-09 07:55:08.819975 | orchestrator | 2026-04-09 07:55:08 - disconnect routers
2026-04-09 07:55:08.847960 | orchestrator | 2026-04-09 07:55:08 - clean up subnets
2026-04-09 07:55:08.879017 | orchestrator | 2026-04-09 07:55:08 - clean up networks
2026-04-09 07:55:09.047747 | orchestrator | 2026-04-09 07:55:09 - clean up security groups
2026-04-09 07:55:09.087613 | orchestrator | 2026-04-09 07:55:09 - clean up floating ips
2026-04-09 07:55:09.116981 | orchestrator | 2026-04-09 07:55:09 - clean up routers
2026-04-09 07:55:09.525182 | orchestrator | ok: Runtime: 0:00:01.391795
2026-04-09 07:55:09.529017 |
2026-04-09 07:55:09.529190 | PLAY RECAP
2026-04-09 07:55:09.529331 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-09 07:55:09.529402 |
2026-04-09 07:55:09.676025 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-09 07:55:09.677708 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-09 07:55:10.433823 |
2026-04-09 07:55:10.433999 | PLAY [Base post-fetch]
2026-04-09 07:55:10.450176 |
2026-04-09 07:55:10.450333 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-09 07:55:10.516468 | orchestrator | skipping: Conditional result was False
2026-04-09 07:55:10.531968 |
2026-04-09 07:55:10.532196 | TASK [fetch-output : Set log path for single node]
2026-04-09 07:55:10.590495 | orchestrator | ok
2026-04-09 07:55:10.599061 |
2026-04-09 07:55:10.599210 | LOOP [fetch-output : Ensure local output dirs]
2026-04-09 07:55:11.124784 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/efba89e74a524d7d8e2931de160f209f/work/logs"
2026-04-09 07:55:11.411512 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/efba89e74a524d7d8e2931de160f209f/work/artifacts"
2026-04-09 07:55:11.693392 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/efba89e74a524d7d8e2931de160f209f/work/docs"
2026-04-09 07:55:11.722264 |
2026-04-09 07:55:11.722485 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-09 07:55:12.667351 | orchestrator | changed: .d..t...... ./
2026-04-09 07:55:12.667830 | orchestrator | changed: All items complete
2026-04-09 07:55:12.667931 |
2026-04-09 07:55:13.382795 | orchestrator | changed: .d..t...... ./
2026-04-09 07:55:14.171927 | orchestrator | changed: .d..t...... ./
2026-04-09 07:55:14.203745 |
2026-04-09 07:55:14.203910 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-09 07:55:14.233201 | orchestrator | skipping: Conditional result was False
2026-04-09 07:55:14.236883 | orchestrator | skipping: Conditional result was False
2026-04-09 07:55:14.249166 |
2026-04-09 07:55:14.249375 | PLAY RECAP
2026-04-09 07:55:14.249501 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-09 07:55:14.249540 |
2026-04-09 07:55:14.386760 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-09 07:55:14.389193 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-09 07:55:15.161359 |
2026-04-09 07:55:15.161559 | PLAY [Base post]
2026-04-09 07:55:15.177052 |
2026-04-09 07:55:15.177200 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-09 07:55:16.182395 | orchestrator | changed
2026-04-09 07:55:16.189759 |
2026-04-09 07:55:16.189879 | PLAY RECAP
2026-04-09 07:55:16.189946 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-09 07:55:16.190008 |
2026-04-09 07:55:16.314678 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-09 07:55:16.315751 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-09 07:55:17.122289 |
2026-04-09 07:55:17.122486 | PLAY [Base post-logs]
2026-04-09 07:55:17.133588 |
2026-04-09 07:55:17.133726 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-09 07:55:17.618200 | localhost | changed
2026-04-09 07:55:17.628403 |
2026-04-09 07:55:17.628580 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-09 07:55:17.666554 | localhost | ok
2026-04-09 07:55:17.672868 |
2026-04-09 07:55:17.673021 | TASK [Set zuul-log-path fact]
2026-04-09 07:55:17.701750 | localhost | ok
2026-04-09 07:55:17.717900 |
2026-04-09 07:55:17.718061 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-09 07:55:17.756460 | localhost | ok
2026-04-09 07:55:17.763765 |
2026-04-09 07:55:17.763947 | TASK [upload-logs : Create log directories]
2026-04-09 07:55:18.310248 | localhost | changed
2026-04-09 07:55:18.313647 |
2026-04-09 07:55:18.313768 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-09 07:55:18.844964 | localhost -> localhost | ok: Runtime: 0:00:00.007383
2026-04-09 07:55:18.854880 |
2026-04-09 07:55:18.855094 | TASK [upload-logs : Upload logs to log server]
2026-04-09 07:55:19.430739 | localhost | Output suppressed because no_log was given
2026-04-09 07:55:19.432719 |
2026-04-09 07:55:19.432825 | LOOP [upload-logs : Compress console log and json output]
2026-04-09 07:55:19.489650 | localhost | skipping: Conditional result was False
2026-04-09 07:55:19.495048 | localhost | skipping: Conditional result was False
2026-04-09 07:55:19.507554 |
2026-04-09 07:55:19.507809 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-09 07:55:19.555251 | localhost | skipping: Conditional result was False
2026-04-09 07:55:19.556144 |
2026-04-09 07:55:19.559357 | localhost | skipping: Conditional result was False
2026-04-09 07:55:19.572701 |
2026-04-09 07:55:19.572959 | LOOP [upload-logs : Upload console log and json output]